Data Security Working Group Meeting - 2/27/25
Data Security Publications:
Publications in Development:
Proposed (2025):
Meeting Summary:
The meeting covered various aspects of AI technology, including its potential applications, risks, and ethical considerations. The team discussed their ongoing project, focusing on data security within AI environments and the importance of prompt security and guardrails. They also explored the latest advancements in AI tools and their potential impact on different industries, while addressing concerns about AI ethics, security, and legality.
Fin Cloud Summit Follow-Up Discussion:
Combining Ideas for Data Security Paper:
Project Progress and Future Direction:
- In the meeting, Alex, Rocco, Vashti, and Daniel discussed the project's progress and future direction. Rocco expressed interest in updating the Privacy Enhancing Technologies section, which he had initially contributed to, and Alex suggested that others could also contribute to different sections. The team discussed the need to clean up and distill the project's content, with Rocco noting that the current document was overwhelming. Vashti said she would start reviewing the project that afternoon.
Prompt Security and Data Labeling:
- In the meeting, Daniel, Vashti, Rocco, and Alex discussed the importance of prompt security, particularly limiting users from prompting with sensitive data. Daniel suggested adding language around input restrictions and prompt security, and Rocco emphasized the significance of these issues. They also touched on data label security and the concept of data loss prevention (DLP). Daniel raised the need for nuance in prompt security, such as limiting responses for certain users. The group agreed that these issues were often overlooked and needed more attention. Alex proposed that Daniel start drafting language around prompt security. The conversation ended with the understanding that current papers on the topic did not delve deeply into prompts or outputs.
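As a purely illustrative aside (not from the meeting), the kind of input restriction discussed above can be sketched as a minimal DLP-style check that scans a prompt for sensitive-data patterns before it reaches a model. The pattern names and function names here are hypothetical; real DLP tooling uses far richer detection than these two regexes.

```python
import re

# Illustrative sketch only: block prompts containing sensitive data
# (here, US SSN and credit-card-like number patterns) before they
# are sent to an AI model. Patterns are simplified for demonstration.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return labels of any sensitive patterns found in the prompt."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def is_allowed(prompt: str) -> bool:
    """True if the prompt contains no detected sensitive data."""
    return not check_prompt(prompt)
```

A production filter would typically add redaction (masking the match rather than rejecting the whole prompt) and per-user policies, echoing Daniel's point about limiting responses for certain users.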
Organizational Responsibilities and AI Interactions:
- Daniel proposed changes to the paper, focusing on the depth of organizational responsibilities. Alex and Rocco debated where to place these changes, suggesting the planning, input variables, or data inputs sections. The group covered the risk of input challenges and the importance of data standards for AI-to-AI interactions, as well as the potential of AI for form recognition and the need for a standard template. The conversation ended on the prospect of new and better approaches.
Human Involvement in AI Systems:
- The team discussed the importance of human involvement in AI systems, emphasizing the need to direct AI models toward what is important to users. They weighed the risks of self-hosted versus publicly hosted AI and the need for guardrails in AI systems to prevent misuse. The conversation shifted to structured versus unstructured data, with a focus on the need to structure data for AI systems. The team also touched on malicious prompting, discussing how attackers can trick AI systems and the potential for prompt injection. The conversation ended with a discussion of the importance of creativity in AI development.
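To make the prompt-injection risk mentioned above concrete, here is a toy guardrail sketch (not from the meeting, and the marker list is a hypothetical example): it flags untrusted text, such as a retrieved document, that contains common injection phrasing before that text is placed into a model's prompt.

```python
# Illustrative sketch only: a naive keyword guardrail for prompt injection.
# Real guardrails use trained classifiers; simple keyword matching like
# this is easily bypassed and is shown purely to demonstrate the idea.
INJECTION_MARKERS = (
    "ignore previous instructions",
    "disregard the above",
    "you are now",
    "system prompt",
)

def flag_injection(untrusted_text: str) -> bool:
    """True if the untrusted text contains a known injection phrase."""
    lowered = untrusted_text.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)
```

The weakness of this approach is exactly the team's point about guardrail circumvention: attackers rephrase, so layered defenses matter more than any single filter.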
Guardrails, Shadow AI, and Standardization:
- The team discussed the concept of guardrails in AI and how they apply to various entities. They noted that circumventing guardrails is typically a malicious act, but can also be legitimate, for example when done to test or improve a company's security. They also discussed shadow AI usage, where employees use unauthorized AI tools, and the problems this creates. They proposed standardizing AI tools within an organization to prevent shadow AI usage, while acknowledging that the rapid evolution of AI tools, with new models released frequently, makes consistency challenging to maintain.
AI Tools Capabilities and Integration Discussion:
- Alex discussed the functionalities and potential of AI systems, particularly the latest versions of Claude and ChatGPT. He highlighted recent advancements in AI tooling, such as the Model Context Protocol (MCP) and community servers, which have significantly improved these tools' capabilities, and mentioned their integration with other major players like AWS and DeepSeek. He shared his experience using these tools for research and hinted at the potential of combining different aspects of them for innovative solutions. Rocco described his company's cautious approach to AI due to the sensitive nature of its financial data. Daniel shared his positive experience with Claude, particularly its ability to generate accurate HTML code and its potential for targeted use cases. The team ended the conversation by discussing potential use cases for AI systems in their respective work environments.
Exploring AI Risks and Challenges:
- In the meeting, Daniel, Alex, and Rocco discussed various AI topics, including AI ethics, AI honesty, data set poisoning, AI security, and AI legality. They examined the potential risks and challenges associated with AI, such as data set poisoning and AI systems accessing copyrighted material, and touched on AI epistemology, the study of knowledge and how to determine what is actually true. The team agreed to explore these topics further in their paper, with a focus on interesting, real-world issues. They also discussed the potential for AI to replace human jobs and the need for society to prepare for this change. The conversation ended with the team expressing appreciation for the discussion and looking forward to further work on the paper.
------------------------------
Alex Kaluza
Research Analyst
Cloud Security Alliance
------------------------------