The bad news is that we forgot to turn the recording on at the beginning, so it is missing the meeting's first 20 minutes or so. However, most of the conversation should still be captured.
Original Message:
Sent: Dec 29, 2023 12:10:14 PM
From: Jackson Munuo
Subject: 2023-12-20 AI Tech & Risk Meeting Minutes
Thank you for sharing the meeting minutes. They are very helpful for those of us who could not participate. Is there any chance the meeting was recorded and the video/audio can be shared too?
Thanks,
Jackson
------------------------------
Jackson Munuo
VP
CNA
Original Message:
Sent: Dec 26, 2023 08:46:14 AM
From: Xin Ai
Subject: 2023-12-20 AI Tech & Risk Meeting Minutes
A great start on AI risks. I'd love to contribute as well. Thanks!
------------------------------
Xin Ai
Ally Financial Inc
Original Message:
Sent: Dec 22, 2023 10:53:58 AM
From: Josh Buker
Subject: 2023-12-20 AI Tech & Risk Meeting Minutes
AI Tech & Risk Meeting Minutes
Dec 20, 2023
Meeting Summary
During the meeting, the participants discussed the news about a Chevy dealer selling a car for $1, expressing concerns about reputational risk and potential bad PR. They also introduced the elected co-chairs for the AI Technology & Risk Working Group and discussed their roles and responsibilities. The ongoing work on risk identification for AI was covered, including collaboration between the different working groups. The MECE model was explained, with a focus on threats and vulnerabilities in AI models. Risks related to AI infrastructure and data infrastructure were also discussed, along with the need for a risk-centric approach. The participants discussed the scope of risk assessment, the identification of objects of attack, and the measurement of risk and impact level. The group agreed on collaboration and next steps, including the use of a working spreadsheet for collaboration and the need for human-in-the-loop review.
Next Steps
- Please review the shared spreadsheet:
- AI Risks Categories
- Is the spreadsheet too complex, too simplified, or just right?
- The spreadsheet has been locked to comment/suggestion mode so that we can track contributions properly.
- The next meeting will be on January 3, 2024, for those who can attend. We will be continuing the biweekly cadence, with the next meeting after that being Jan 17, and so on.
Topics & Highlights
1. Discussion about the Chevy dealer selling a car for $1
The participants mentioned the news about a Chevy dealer supposedly selling a car for $1.
They discussed the reputational risk and potential bad PR if the news turned out to be true.
They mentioned the possibility of it being a marketing strategy and the uncertainty about its authenticity.
They also discussed the high prices of cars in Canada and the current state of car sales.
2. Introduction of the elected co-chairs
The participants introduced the elected co-chairs for the AI Technology & Risk Working Group:
Mark Yanalitis
Satish Govindappa
Chris Kirschke
Mark Yanalitis provided a brief introduction about himself and his experience with the CSA.
Satish mentioned his role as a chapter lead for CSA San Francisco and his involvement in working teams.
The speaker mentioned their primary role in reviewing all kinds of AI-based applications.
The speaker expressed their interest in AI and thanked Sean for selecting them.
3. Introduction of the main research analyst
4. Discussion on risk identification for AI
5. Discussion on Risk Categories and Spreadsheet
Marco presented the worksheet based on Daniele's document and included categories such as lifecycle, asset, component, threat, and impact.
The Excel spreadsheet was still a work in progress and open for review.
Sunil shared his approach, which aimed to establish a top-down view of the problem.
6. MECE Model
The speaker mentions that the MECE model originated from Gary McGraw and his Berryville Institute of Machine Learning, with some adaptations made. They explain that a MECE model ensures completeness in the model itself.
The speaker discusses the possibility of adding threats against availability to the MECE model, even though it was initially excluded due to the lack of interesting threats at the time.
The speaker explains the concept of objects of attack and the importance of considering them in the MECE model. They mention the need for a separate model if the objects of attack overlap too much with other aspects.
A question is raised about whether there are any objects of attack not incorporated when it comes to AI.
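As a rough illustration of the MECE (mutually exclusive, collectively exhaustive) idea discussed above, the following Python sketch checks whether a candidate set of threat categories partitions a list of identified threats with no overlap and no gaps. All category and threat names here are invented examples, not the working group's actual taxonomy:

```python
# Hypothetical sketch: checking that threat categories are MECE
# over a set of identified threats. Names are invented examples.

categories = {
    "integrity": {"data poisoning", "model tampering"},
    "confidentiality": {"model extraction", "training-data inference"},
    "availability": {"resource exhaustion"},
}

identified_threats = {
    "data poisoning", "model tampering", "model extraction",
    "training-data inference", "resource exhaustion",
}

def check_mece(categories, universe):
    """Return (is_exclusive, is_exhaustive) for a candidate partition."""
    all_members = [t for members in categories.values() for t in members]
    exclusive = len(all_members) == len(set(all_members))  # no threat in two buckets
    exhaustive = set(all_members) == universe              # every threat covered
    return exclusive, exhaustive

print(check_mece(categories, identified_threats))  # (True, True)
```

A check like this makes the two failure modes concrete: a threat listed in two categories breaks mutual exclusivity, while an identified threat with no category breaks exhaustiveness.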
7. Vulnerabilities in AI Models
The speaker contrasts vulnerabilities in AI with traditional vulnerability discoveries, highlighting the difference in nature and impact. They mention that vulnerabilities in AI can be silent failures, biases, or statistical violations.
The speaker acknowledges the valid question about how vulnerabilities in AI intersect with the MECE model. They explain that the current discussion focuses on threats rather than vulnerabilities, and there might not be a well-defined MECE model for vulnerabilities as a whole.
A question is raised about how a MECE model would address silent failures and other vulnerabilities specific to AI models.
8. Definition of Risks
9. MECE Models
The speaker mentions that MECE models are hard to craft.
The speaker discusses fitting information into the model and making it mutually exclusive and collectively exhaustive.
10. AI Infrastructure and Data Infrastructure Risks
The speaker discusses risks related to foundational and data infrastructure, as well as AI-specific chipsets.
The speaker raises the question of whether the supply chain issues for AI-specific chipsets are distinct or general problems.
The discussion covers the impact of supply chain attacks and the significance of vulnerabilities in AI infrastructure.
The speaker emphasizes that the discussed risks are specific to AI infrastructure and chipsets.
11. Risk-centric vs Impact-centric approach
The group discusses the need to define categories of risks but not determine the intensity of the risk.
The discussion includes aligning the objects of attack with the components in another document and distinguishing between AI-specific applications and general applications.
12. Scope of Risk Assessment
13. Objects of Attack
14. Risk measurement and impact level
The group expressed concern about the impact level and proposed using a tuple to measure the risk and apply it to the contour.
The group discussed the idea of adding the persona and the finer-grained asset or component.
The group discussed OWASP CycloneDX and the Machine Learning Bill of Materials (ML-BOM) for capturing dependencies between assets and components.
The group proposed combining efforts with Sunil to filter threats based on the scope and suggested including the object of attack in the spreadsheet.
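The tuple-based risk measurement mentioned above could be sketched roughly as follows. The field names, the persona/component fields, and the 1-5 scales are invented placeholders for illustration, not the working group's agreed scheme:

```python
# Hypothetical sketch of a risk tuple: the fields (threat, persona,
# component, likelihood, impact) and the 1-5 scales are invented
# placeholders, not the working group's agreed scheme.
from dataclasses import dataclass

@dataclass(frozen=True)
class RiskTuple:
    threat: str          # e.g. "prompt injection"
    persona: str         # affected or attacking persona, e.g. "end user"
    component: str       # finer-grained asset or component
    likelihood: int      # 1 (rare) .. 5 (frequent)
    impact: int          # 1 (negligible) .. 5 (severe)

    def score(self) -> int:
        """Simple likelihood x impact score for ranking rows."""
        return self.likelihood * self.impact

r = RiskTuple("prompt injection", "end user", "LLM serving API", 4, 3)
print(r.score())  # 12
```

Capturing each risk as a structured tuple rather than free text would make the spreadsheet rows filterable by persona or component, in line with the proposal to filter threats by scope.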
15. Collaboration and next steps
16. Working spreadsheet and human-in-the-loop review
------------------------------
Josh Buker
Research Analyst
Cloud Security Alliance
------------------------------