Artificial Intelligence

NIST Draft - Taxonomy of AI Risk

  • 1.  NIST Draft - Taxonomy of AI Risk

    Posted Nov 03, 2021 04:10:00 AM
    Edited by Michael Roza Nov 03, 2021 04:11:51 AM
    Hi All,

    @James Angle

    The National Institute of Standards and Technology (NIST) aims to cultivate trust in the design, development, use, and governance of Artificial Intelligence (AI) technologies and systems in ways that enhance economic security and improve quality of life.

    NIST focuses on improving measurement science, technology, standards, and related tools, including evaluation and data. This white paper focuses on the preconditions of trust in AI and aims to further engage the AI community in a collaborative process to build consensus on risk-related terminology so that these risks can be identified and managed.

    The paper starts by reviewing several relevant policy directives that identify sources or types of risk across the AI lifecycle. For example, the Organisation for Economic Co-operation and Development (OECD) AI Principles [1] specify that AI needs to have:
    • Traceability to human values such as rule of law, human rights, democratic values, and diversity, and ensuring fairness and justice
    • Transparency and responsible disclosure so people can understand and challenge AI-based outcomes
    • Robustness, security, and safety throughout the AI lifecycle to manage risks
    • Accountability in line with these principles

    Similarly, the European Union Digital Strategy's Ethics Guidelines for Trustworthy AI [2] identify seven key principles of trustworthy AI:
    • Human agency and oversight
    • Technical robustness and safety
    • Privacy and data governance
    • Transparency
    • Diversity, non-discrimination, and fairness
    • Environmental and societal well-being
    • Accountability

    Finally, US Executive Order 13960, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government [3], specifies that AI should be:
    • Lawful and respectful of our Nation's values
    • Purposeful and performance-driven...using AI, where the benefits of doing so significantly outweigh the risks, and the risks can be assessed and managed
    • Safe, secure, and resilient
    • Understandable…by subject matter experts, users, and others, as appropriate
    • Responsible and traceable
    • Regularly monitored
    • Transparent
    • Accountable

    https://www.nist.gov/document/draft-taxonomy-ai-risk-october-15-2021

    ------------------------------
    Michael Roza CPA, CISA, CIA, MBA, Exec MBA
    ------------------------------