The Inner Circle

  • 1.  NIST Draft – Taxonomy of AI Risk

    Posted 23 days ago
    Hi All,

    @James Angle

    The National Institute of Standards and Technology (NIST) aims to cultivate trust in the design, development, use, and governance of Artificial Intelligence (AI) technologies and systems in ways that enhance economic security and improve quality of life.

    NIST focuses on improving measurement science, technology, standards, and related tools – including evaluation and data. This white paper focuses on the preconditions of trust in AI and aims to further engage the AI community in a collaborative process to encourage consensus regarding terminology related to risk so that these types of risk may be identified and managed.

    The paper begins by identifying several relevant policy directives that describe sources or types of risk across the AI lifecycle. For example, the Organisation for Economic Co-operation and Development (OECD) AI principles [1] specify that AI needs to have:
    • Traceability to human values such as rule of law, human rights, democratic values, and diversity, and ensuring fairness and justice
    • Transparency and responsible disclosure so people can understand and challenge AI-based outcomes
    • Robustness, security, and safety through the AI lifecycle to manage risks
    • Accountability in line with these principles

    Similarly, the European Union Digital Strategyʼs Ethics Guidelines for Trustworthy AI [2] identifies seven key principles of trustworthy AI:
    • Human agency and oversight
    • Technical robustness and safety
    • Privacy and data governance
    • Transparency
    • Diversity, non-discrimination, and fairness
    • Environmental and societal well-being
    • Accountability

    Finally, US Executive Order 13960, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government [3], specifies that AI should be:
    • Lawful and respectful of our Nationʼs values
    • Purposeful and performance-driven...using AI, where the benefits of doing so significantly outweigh the risks, and the risks can be assessed and managed
    • Safe, secure, and resilient
    • Understandable…by subject matter experts, users, and others, as appropriate
    • Responsible and traceable
    • Regularly monitored
    • Transparent
    • Accountable

    Michael Roza CPA, CISA, CIA, MBA, Exec MBA

  • 2.  RE: NIST Draft – Taxonomy of AI Risk

    Posted 21 days ago
    Here are some related links on the topic for easier reference, in case you're interested in reading more (like I was) :)

    J Whorley
    Cybersecurity Graduate Student
    New York University