Artificial Intelligence


My highlights for CSA AI Summit: 5-person panel was the showstopper of the day

  • 1.  My highlights for CSA AI Summit: 5-person panel was the showstopper of the day

    Posted Jan 17, 2024 01:07:00 PM

    My summaries (not an official transcript; these are my impressions, recollections, and takeaways). Key panel observations:

    1. AI/ML models exist on a spectrum: models that operate on a simple prompt-and-response basis (e.g., "How many household cat breeds exist in the UK?") require a different level of scrutiny than those that take in a data prompt and return a decision (e.g., "Based on the facts in this criminal case, what are the minimum sentencing recommendations?"). Clearly, models sit on a sliding scale based upon the use case, industry, level of expected regulation, and the required explainability regime.
    2. The current state of internal auditing, with its bias toward technical auditing, is not sufficient to scale with the rate of change in model development, both for frontier models (i.e., those that exceed the capabilities of existing large models) and foundation models (e.g., GPT). The panel agreed that in 2024 there will be a huge leap in large model performance within 6 months, with capability potentially doubling in 12 months' time. Beyond the existing methods and models associated with supervised, unsupervised, and reinforcement learning, new methods and models will emerge. With respect to auditing and compliance assessment, there is a notable need for a level of empirical scientific sophistication to assess bias and transparency; this is not native to technical auditing. It requires an independent team of ML engineers and data scientists who can assess with statistical confidence that a model's inputs correlate with its outputs. At present, there are fewer than five consultancies that can perform at the level of empiricism needed to assess responsible scaling of AI usage. Many companies will be forced to develop their own teams for that purpose.
    3. Privacy is an ongoing concern with all models, notably among the frontier models. Rather than thinking of the scope in terms of a monolithic view of privacy, think of the problem space as a need for "differential privacy," where a "trust management framework" honors and ensures personal privacy guarantees and obligations according to use case and risk profile.
    4. Frontier and foundation models will force niche and proprietary risk assessment and control regimes that vary the risk profile by AI use case, industry vertical, and a company's risk appetite. Attestable and verifiable data usage and manipulation methods are needed to support the explainability regimes governing AI workflows. Because industry-standard compliance regimes cannot scale with the pace of change, niche risk frameworks and regimes are a foregone conclusion. Technical auditing is not enough to ensure that data minimization and ML ops meet compliance obligations. Frameworks need to scale with the nature of the domain assessed. Within the field, there will be AI IT controls that change very little from engagement to engagement, while others will change rapidly. AI risk assessment frameworks need to scale with this level of dynamism.
    5. AI will lead to new threat models and methods. Traditional threat modeling does not take into account introspective and interrogative techniques for assessing a model's resiliency and resistance to statistical (and other) attack techniques. Traditional tech-driven penetration testing is not properly staffed from a bench-strength point of view, and different skills need to come into the specialty.
    6. Forward-looking statements:
      1. Do not be hyper-focused on what is right in front of you. Pay attention to the long-view.  In a manner of speaking, the object in the mirror is closer than it appears. 
      2. The degree of risk focus is contingent upon the use case, the industry, and specifically what business problem is being solved.
      3. With respect to the large providers in the Frontier Model workspace, the systemic goal is democratization of AI capability. 
      4. Privacy-enhancing technology is a necessity in a future where personally generated data is used by algorithms to provide feedback to the consumer. Privacy covenants must delineate between AI usage and data retention for personal use. It is conceivable, and already a mode of operation for some frontier models, to have models (sentinels of sorts) inspecting the data-input/model-output consistency of even larger models.
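    To make the "differential privacy" idea from point 3 concrete: the classic mechanism adds calibrated Laplace noise to a query answer so that any one individual's record has only a bounded effect on what is released. The sketch below is a minimal, self-contained illustration of that standard technique; the function name, the example count, and the epsilon value are my own illustrative choices, not anything presented by the panel.

    ```python
    import math
    import random

    def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
        """Release a noisy answer satisfying epsilon-differential privacy.

        Noise is drawn from Laplace(0, sensitivity/epsilon), so changing any
        single individual's record shifts the output distribution by at most
        a factor of exp(epsilon).
        """
        scale = sensitivity / epsilon
        # Sample Laplace(0, scale) by inverse-transform sampling on u in (-0.5, 0.5).
        u = random.random() - 0.5
        return true_value - scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

    # Example: privately release a count of matching records.
    # Counting queries have sensitivity 1 (one person changes the count by at most 1).
    true_count = 1234
    noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
    ```

    A smaller epsilon means stronger privacy but noisier answers; a "trust management framework" of the kind the panel described would, in effect, choose epsilon per use case and risk profile.
    
    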



    ------------------------------
    Mark Yanalitis
    ------------------------------



  • 2.  RE: My highlights for CSA AI Summit: 5-person panel was the showstopper of the day

    Posted Jan 24, 2024 12:43:00 PM

    I appreciate the summary since I could not attend due to my timezone.  CSA events and workgroups are not kind to those of us outside the US.



    ------------------------------
    Sai Honig, CCSP, CISSP


    Wellington, New Zealand
    ------------------------------