Data Security


Data Security Working Group Meeting - 5/9/24

    Posted May 14, 2024 11:41:00 AM
    Edited by Alex Kaluza May 14, 2024 12:33:59 PM

    Data security within an AI Environment, Development - 6/15

    Meeting Summary
    The team discussed the intersections between data security, AI systems, and compliance, focusing on developing a data-centric security approach. They explored techniques for mitigating bias in AI models and the challenges of processing large datasets for AI applications.

    Adjusting Meeting Time and AI Initiatives
    The team agreed to adjust their meeting time to 2 PM to accommodate members on both coasts and to focus their discussions on data security. They discussed their experiences and involvement in the CSA sessions and the RSA Conference, highlighting the quality of the sessions and their own contributions. The team also discussed the progress of various AI-related initiatives and upcoming events, emphasizing the importance of their team's involvement, particularly in governance and compliance. Gopi confirmed that he would share more details about the planned sessions.

    Data Security, Privacy, and AI Intersections
    The team discussed the intersections between data security, privacy, and artificial intelligence (AI). Lazarus and Gopi agreed to begin creating content focused on privacy and its impact on AI, with the potential for future presentation sessions. Rocco pointed out that while there was some overlap between data security and privacy in AI, there was also significant unique content to be covered. Additionally, Rocco brought up the development of an AI policy within his company in response to customer and regulatory requirements, and suggested that others in similar positions could benefit from sharing their policies for comparison and collaboration.

    AI Use Policies and Conference Objectives
    Rocco, Lazarus, and Alex discussed the organization's policies and guidelines for AI use, emphasizing the importance of security, privacy, and compliance with regulatory frameworks such as HIPAA. They agreed on the need to minimize data usage and anonymize data wherever possible. Lazarus introduced an internal document outlining their conference objectives, which aligned with Dave's security and privacy efforts and served as a reference. The team agreed to review their current practices to ensure they are in line with their established goals.

    AI System Development and Bias Mitigation
    Alex, Lazarus, and Rocco discussed the development of an AI system and its data security. They created an outline for the system, which Alex adapted into a CSA template format. The team also deliberated on the enforcement of rules and the challenges of identifying bias within AI systems, with Rocco suggesting a peer-based system for bias detection. The team agreed on a new method for developing a company-specific paper, with Rocco and Onyeka volunteering to write the introduction and content sections, respectively. The team aimed to focus on content development for the month, planning to make significant progress by the end of June, and remained open to other ideas, including a possible restructuring of the project.

    Data Security in AI Environment Strategy
    Lazarus, Alex, and Rocco discussed the approach to data security within an AI environment. They agreed that a data-centric approach is necessary, focusing on the security of the data rather than the traditional border-focused approach. They also discussed the necessity of a paper to outline this approach, with Lazarus suggesting it should include an introduction, understanding AI and intrinsic demand for data security, techniques, and a section on the approach to data security with AI. Rocco emphasized the importance of a strategy overview and the need to focus primarily on a data-centric approach.

    Document Formatting and Governance Discussion
    Alex addressed some technical issues with the document's formatting and sought collaboration from the team, particularly Rocco and Lazarus. The team discussed potential main categories and sections for the document, emphasizing the need to avoid duplication in the 'Governance and Compliance' AI paper. The discussion focused on data security, compliance, and privacy, with Lazarus stressing the importance of data protection as the key to ensuring privacy and highlighting the potential conflicts between security and privacy/compliance objectives.

    AI and RMF Security and Privacy Discussion
    Lazarus, Rocco, Alex, and Onyeka discussed the security and privacy domains of AI and RMF. They emphasized the importance of overlapping security and privacy domains, referencing existing work to avoid duplication, and including corporate policy around AI. The team also deliberated on the need for technical measures to ensure data security, and the importance of an exit strategy and data integrity. The discussion focused on determining the level of detail in a paper, considering the audience and whether to delve into technical controls or focus on high-level policies. The team seemed to reach a consensus on focusing on current gaps rather than delving into technical details.

    Bias Mitigation in Security Controls
    Rocco and Lazarus discussed the issue of bias mitigation in security controls, particularly in relation to AI. They identified a significant gap in security due to the lack of a technical control to determine bias. Lazarus suggested that the effectiveness of AI models could be evaluated through testing data. They also discussed the different types of bias that can arise in data analysis, such as statistical and ethical bias, and the stages of data processing where these biases might occur. The discussion concluded with Alex acknowledging the complexity of these issues and the need to be aware of both under- and over-biasing.
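    As an illustration of the kind of testing-data evaluation Lazarus described, the sketch below computes a common statistical-bias metric (the demographic parity difference) over a held-out test set. This is a hypothetical example, not a control the group adopted; the function names, group labels, and predictions are all illustrative assumptions.

    ```python
    # Hypothetical sketch: quantifying statistical bias in model predictions
    # using test data. The demographic parity difference compares positive-
    # prediction rates across groups; 0 means all groups are treated alike.

    def selection_rate(predictions, groups, group):
        """Fraction of positive (1) predictions among members of `group`."""
        members = [p for p, g in zip(predictions, groups) if g == group]
        return sum(members) / len(members) if members else 0.0

    def demographic_parity_difference(predictions, groups):
        """Largest gap in positive-prediction rates across all groups."""
        rates = [selection_rate(predictions, groups, g) for g in set(groups)]
        return max(rates) - min(rates)

    # Illustrative binary predictions for two demographic groups
    preds  = [1, 0, 1, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    gap = demographic_parity_difference(preds, groups)
    print(f"Demographic parity difference: {gap:.2f}")  # 0.75 - 0.25 = 0.50
    ```

    A gap near zero suggests parity on this metric, while a large gap flags a model for review; in practice such a check would be one of several complementary bias measures rather than a single pass/fail control.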

    Discussing Data Challenges in AI
    Alex and Rocco discussed the potential issues and ramifications of processing large amounts of data, particularly in the context of AI. They highlighted the challenges of distinguishing between accurate and false data, especially when using anonymized or pseudonymized data. They also discussed recent examples of AI mishaps and the potential for malicious data poisoning. The conversation concluded with Alex appreciating the insights gained and looking forward to further exploring these topics in a paper.

    Alex Kaluza
    Research Analyst
    Cloud Security Alliance