The Inner Circle


NIST AI Risk Management Framework Aims to Improve Trustworthiness

  • 1.  NIST AI Risk Management Framework Aims to Improve Trustworthiness

    Posted Jan 26, 2023 08:25:00 AM
    Hi All,

    @Jim Reavis

    NIST AI Risk Management Framework Aims to Improve Trustworthiness

    NIST today released its Artificial Intelligence Risk Management Framework (AI RMF 1.0), a guidance document for voluntary use by organizations designing, developing, deploying, or using AI systems to help manage the risks of AI technologies. The Framework seeks to cultivate trust in AI technologies and promote AI innovation while mitigating risk. The AI RMF follows a direction from Congress for NIST to develop the framework and was produced in close collaboration with the private and public sectors over the past 18 months.

    AI RMF 1.0 was released at a live-streamed event today with Deputy Secretary of Commerce Don Graves; Under Secretary of Commerce for Standards and Technology and NIST Director Laurie Locascio; Principal Deputy Director for Science and Society in the White House Office of Science and Technology Policy Alondra Nelson; House Science, Space, and Technology Committee Chairman Frank Lucas and Ranking Member Zoe Lofgren; and panelists representing businesses and civil society. A recording of the event is available here (https://www.nist.gov/news-events/events/2023/01/nist-ai-risk-management-framework-ai-rmf-10-launch).
    NIST also released today, for public comment, a companion voluntary AI RMF Playbook, which suggests ways to navigate and use the framework; a Roadmap for future work to enhance the Framework and its use; and the first two crosswalks mapping AI RMF 1.0 to key AI standards and to US and EU documents.

    NIST plans to work with the AI community to update the framework periodically and welcomes suggestions for additions and improvements to the Playbook at any time.

    Comments received through February 2023 will be included in an updated version of the Playbook to be released in spring 2023.

    Sign up to receive email notifications about NIST's AI activities here or contact us at: [email protected]. Also, see information about how to engage in NIST's broader AI activities.




    ------------------------------
    Michael Roza CPA, CISA, CIA, CC, MBA, Exec MBA
    ------------------------------


  • 2.  RE: NIST AI Risk Management Framework Aims to Improve Trustworthiness

    Posted Jan 27, 2023 11:35:00 PM

    Today I read a white paper by Jessica Newman of UC Berkeley's Center for Long-Term Cybersecurity (CLTC) that adds an extra dimension to the NIST AI Risk Management Framework.

    The report is entitled "A Taxonomy of Trustworthiness for Artificial Intelligence" and subtitled "Connecting Properties of Trustworthiness with Risk Management and the AI Lifecycle."

    https://cltc.berkeley.edu/publication/a-taxonomy-of-trustworthiness-for-artificial-intelligence/

    (no paywall, no signing in – how refreshing!)

    As the subtitle indicates, the report maps the concepts of the NIST AI RMF, in particular the lifecycle stages it defines (Plan and Design, Collect and Process Data, Build and Use Model, Verify and Validate, Deploy and Use, Operate and Monitor, Use or Impacted By), to the "characteristics of trustworthiness" (valid and reliable, safe, fair, secure and resilient, explainable and interpretable, privacy-enhanced, accountable and transparent, responsible practice and use). If you imagine the resulting matrix of 7 stages by 8 characteristics, the author then defines a set of properties within each cell, sometimes just one property, often two to four, and in one case 26 of them, for a grand total of 150 distinct properties.
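
    To make that structure concrete, here is a minimal sketch (in Python, purely illustrative and not taken from the report) of how the stages-by-characteristics matrix could be represented; the stage and characteristic names come from the lists above, while the sample property is a placeholder:

    # Illustrative sketch: the taxonomy as a (stage, characteristic) -> properties mapping.
    # Stage and characteristic names are as described above; the sample property string
    # is a placeholder, not text from the CLTC report.

    STAGES = [
        "Plan and Design", "Collect and Process Data", "Build and Use Model",
        "Verify and Validate", "Deploy and Use", "Operate and Monitor",
        "Use or Impacted By",
    ]

    CHARACTERISTICS = [
        "valid and reliable", "safe", "fair", "secure and resilient",
        "explainable and interpretable", "privacy-enhanced",
        "accountable and transparent", "responsible practice and use",
    ]

    # Each cell of the 7 x 8 matrix holds one or more properties (often two to four,
    # in one case 26), and the whole matrix adds up to 150 distinct properties.
    taxonomy = {
        ("Plan and Design", "safe"): [
            "placeholder: document foreseeable harms before design sign-off",
        ],
        # ... the remaining cells would be filled in from the report ...
    }

    def total_properties(tax):
        """Count the distinct properties across all cells of the matrix."""
        return len({prop for props in tax.values() for prop in props})

    print(total_properties(taxonomy))  # 1 with the placeholder above; 150 once fully populated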

    The report also lists (and uses as inputs) a number of existing frameworks for AI trustworthiness, specifically highlighting these:

    • The "Ethics Guidelines for Trustworthy AI" from the High-Level Expert Group on Artificial Intelligence
    • The EU AI Act, which we've discussed several times in our OMG AI PTF meetings
    • The White House Blueprint for an AI Bill of Rights
    • ... plus of course the NIST AI Risk Management Framework itself

    This is not for the faint of heart (78 pages, 2 appendices, 69 footnotes…) but seems to be a really important piece of work for people interested in AI ethics and responsible computing in general, and the NIST AI RMF in particular.



    ------------------------------
    Claude Baudoin
    cébé IT Knowledge Management
    Co-Chair, OMG Cloud Working Group (as well as the OMG AI Platform Task Force)
    https://www.omg.org/cloud
    ------------------------------