The Inner Circle

  • 1.  Draft NISTIR 8312 Four Principles of Explainable Artificial Intelligence

    Posted Sep 27, 2020 11:00:00 AM
    Hi All,

    In an effort to help develop trustworthy AI systems, the National Institute of Standards and Technology (NIST) is requesting feedback on a draft report on AI explainability. Relevant stakeholders are asked to respond by the October 15, 2020 deadline.

    We encourage you to submit comments using the form provided on this page, or by email to [email protected].


    ------------------------------
    Michael Roza CPA, CISA, CIA, MBA, Exec MBA
    ------------------------------


  • 2.  RE: Draft NISTIR 8312 Four Principles of Explainable Artificial Intelligence

    Posted Sep 28, 2020 07:50:00 AM
    So to quote part of the report:

    • Explanation: Systems deliver accompanying evidence or reason(s) for all outputs.
    • Meaningful: Systems provide explanations that are understandable to individual users.
    • Explanation Accuracy: The explanation correctly reflects the system's process for generating the output.
    • Knowledge Limits: The system only operates under conditions for which it was designed or when the system reaches a sufficient confidence in its output.
    One concern I have is that sometimes individual decisions are not a specific aspect of the system. For example, a system that assesses speed camera data and decides whether to issue a traffic fine is consuming a discrete event and providing an output ("car was going X, so speed fine is Y"); a minimal sketch of that discrete case follows the link below. But what about a system that is intended to provide a high-level "meta" statistical model based on a wide variety of inputs that may or may not be correct, or even available, and as such doesn't focus on specific decisions but rather provides steering guidance at a high level? I can't help but think about:

    https://www.youtube.com/watch?v=owI7DOeO_yg
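
    Here is that minimal sketch of the discrete case, in Python (the thresholds, field names, and confidence cutoff are all invented for illustration), in the spirit of the draft's Explanation and Knowledge Limits principles:

        # Minimal sketch: a discrete-event decision that carries its own evidence.
        # Thresholds, field names, and the confidence cutoff are hypothetical.

        def assess_speed_event(measured_kmh: float, limit_kmh: float,
                               sensor_confidence: float) -> dict:
            # Knowledge Limits: decline to decide outside designed conditions.
            if sensor_confidence < 0.95:
                return {"decision": "no_action",
                        "explanation": f"Sensor confidence {sensor_confidence:.2f} "
                                       "is below 0.95; outside designed operating "
                                       "conditions, so no fine is issued."}
            over = measured_kmh - limit_kmh
            decision = "fine" if over > 5 else "no_fine"
            # Explanation: evidence accompanies every output.
            return {"decision": decision,
                    "explanation": f"Car was going {measured_kmh} km/h in a "
                                   f"{limit_kmh} km/h zone ({over:+.0f} km/h); "
                                   "the fine threshold is +5 km/h."}

        print(assess_speed_event(68.0, 50.0, 0.99))  # -> fine, with evidence
        print(assess_speed_event(52.0, 50.0, 0.60))  # -> knowledge-limits refusal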

    I also feel that explainability is a good start, but I worry about a lack of actionability: say we can explain the AI, and assume we've identified a specific problem with it; is there some way to then correct that problem and ensure the fix takes, as it were?
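
    One way to make a fix verifiable is to pair the explained defect with a regression test: the explanation pinpoints the offending feature, the fix removes its influence, and the test keeps it from creeping back. A toy sketch (the scorer, features, and weights are invented for illustration):

        # Toy sketch: explanation -> fix -> regression test.
        # The scorer, features, and weights are invented for illustration.

        weights = {"income": 0.6, "debt": -0.4, "zip_risk": -0.5}

        def score(applicant: dict, w: dict) -> float:
            return sum(w[f] * applicant[f] for f in w)

        def explain(applicant: dict, w: dict) -> dict:
            # Per-feature contributions; exact for this linear scorer.
            return {f: w[f] * applicant[f] for f in w}

        applicant = {"income": 1.0, "debt": 0.5, "zip_risk": 1.0}
        contributions = explain(applicant, weights)
        # The explanation flags zip_risk as the most negative contributor.
        assert min(contributions, key=contributions.get) == "zip_risk"

        fixed = dict(weights, zip_risk=0.0)  # the "fix": drop the proxy feature

        # Regression test: the proxy must no longer influence any score.
        assert score(dict(applicant, zip_risk=1.0), fixed) == \
               score(dict(applicant, zip_risk=0.0), fixed)
        print("fix takes:", explain(applicant, fixed))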

    ------------------------------
    Kurt Seifried
    Chief Blockchain Officer and Director of Special Projects
    Cloud Security Alliance
    [email protected]
    ------------------------------



  • 3.  RE: Draft NISTIR 8312 Four Principles of Explainable Artificial Intelligence

    Posted Sep 29, 2020 08:51:00 PM
    This is not a direct response to the question that you asked, but it is related to the NIST criteria.

    I would want to be able to "diff" the explanations between two executions of a neural network. Let me give you an example:
    1. I run a loan application analysis program on certain inputs, and it comes out with "Deny."
    2. I capture the explanation -- whether I had to say in advance that I wanted the algorithm to produce an explanation together with the recommendation, or had to ask "why?" after getting the answer.
    3. I re-run the program after changing one input, say the applicant's ZIP code. This time it comes out with "Approve."
    4. I capture the explanation for this second run.
    Now what? How do I pinpoint, assuming the explanation is fairly long and complex, why the two results were different?

    In a procedural algorithm, I would talk about the executions "diverging": there had to be a point where a test was performed in an IF or WHILE statement, different branches were taken, and I can then trace the pedigree of the values that caused these separate paths to be taken. With a neural net, it is usually going to be harder to pinpoint what caused the results to be so different. If I'm just presented with two listings of the values of the internal layers of the network (possibly hundreds of nodes), how do I perform this forensic investigation -- say, to discover that the algorithm uses statistics about ZIP codes that are a covert form of race-based preference for people buying property in a predominantly white neighborhood vs. one where Black or Hispanic people mostly live?
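
    For what it's worth, when the explanations can be reduced to per-feature attributions (via occlusion, gradients, surrogate models, or similar), the "diff" itself becomes mechanical: rank the features by how much their contribution changed between the two runs. A minimal sketch, with a trivial linear stand-in for the network and invented feature names:

        # Minimal sketch of "diffing" two explanations as per-feature attributions.
        # attribute() stands in for any attribution method; the weights and
        # feature names are invented for illustration.

        WEIGHTS = {"income": 0.7, "debt_ratio": -0.5, "zip_code_stat": -0.9}

        def attribute(inputs: dict) -> dict:
            # Per-feature contribution to the score (exact for this stand-in).
            return {f: WEIGHTS[f] * inputs[f] for f in WEIGHTS}

        def diff_explanations(run_a: dict, run_b: dict) -> list:
            # Rank features by how much their contribution changed between runs.
            deltas = {f: run_b[f] - run_a[f] for f in run_a}
            return sorted(deltas.items(), key=lambda kv: abs(kv[1]), reverse=True)

        deny    = {"income": 1.0, "debt_ratio": 0.8, "zip_code_stat": 0.9}
        approve = dict(deny, zip_code_stat=0.1)  # only the ZIP-derived input changed

        for feature, delta in diff_explanations(attribute(deny), attribute(approve)):
            print(f"{feature:15s} contribution changed by {delta:+.2f}")
        # zip_code_stat dominates the diff, flagging it for forensic review.

    The hard part, of course, is getting trustworthy attributions out of a real network in the first place; the diff step only localizes the divergence once you have them.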

    ------------------------------
    Claude Baudoin
    Owner & Principal Consultant
    cébé IT & Knowledge Management
    ------------------------------