Artificial Intelligence

  • 1.  Securing ML models

    Posted Oct 24, 2022 08:33:00 AM
    Hello community! This is my first post as a member so it's nice to virtually meet you all.
    I wondered if anyone in the group would be willing to chat to me about securing ML models. We've done some initial research and developed tooling to safeguard against a couple of types of attack, but we're now deciding which avenue to take the research down (i.e. training data sanitisation, detecting abnormal queries, or 'vulnerability' assessment of models) and are looking for external viewpoints as to which would be of most value.
    Please let me know if it is of interest.
    Thanks,
    Julia


    ------------------------------
    Julia Ward
    Director, CTO Office
    WithSecure
    ------------------------------


  • 2.  RE: Securing ML models

    Posted Jan 30, 2023 05:37:00 PM
    Sure

    ------------------------------
    MADHAV CHABLANI
    CIO
    ------------------------------



  • 3.  RE: Securing ML models

    Posted Apr 05, 2023 08:23:00 AM

    Hi,

    What models and architectures do you need to secure: ETL pipelines, data lakes, Kubernetes (K8s), etc.?

    KR



    ------------------------------
    Emilio Mazzon CISM, CISA, CEng, CITP, CSA Board Director
    VP
    SNCL
    ------------------------------



  • 4.  RE: Securing ML models

    Posted Jun 05, 2023 12:59:00 PM

    This is a good topic. Now that LLMs are being used in some businesses, it is time to revisit it. I am currently drafting security guidelines for designing and deploying applications that use LLMs, with the following top 10 points:

    1. Avoid PII and PHI Data: Ensure that prompts and training data used for LLMs do not contain Personally Identifiable Information (PII) or Protected Health Information (PHI) to prevent the risk of unauthorized disclosure.

    2. Access Control for Fine-Tuned Model and Vector Database: Implement strict access controls and authentication mechanisms to restrict access to the fine-tuned LLM model and any associated vector databases. Only authorized individuals should be granted access.

    3. Enforce API Access Control: Implement robust access control measures for LLM APIs, including authentication, authorization, and rate limiting, to prevent unauthorized access or abuse of the API endpoints (rough sketch after this list).

    4. Log Access Details: Maintain comprehensive logs of API access to the LLM and vector database, capturing information such as the user, timestamp, and details of the accessed data. This information can be crucial for auditing, monitoring, and detecting potential security incidents (see the logging sketch after this list).

    5. Clean Data to Reduce Bias: Thoroughly clean and preprocess training data to minimize bias and ensure fair and unbiased behavior of the LLM. Regularly review and update the training data to avoid perpetuating biases.

    6. Implement Guardrails: Integrate guardrails into the LLM's output validation process using open-source libraries like guardrails.ai. This helps verify the model's outputs for compliance, ethics, and other predefined criteria before the results are presented or acted upon (see the output-validation sketch after this list).

    7. Conduct Internal and External Red Team Testing: Perform rigorous testing of the LLM both internally and through external red team engagements. This helps identify vulnerabilities, weaknesses, and potential attack vectors to address before deploying the model into production.

    8. Prevent Prompt Injection: Validate and sanitize user prompts to prevent prompt injection attacks, where malicious input is used to manipulate or exploit the LLM's behavior. Implement input validation techniques to ensure that user prompts meet specific criteria (see the prompt-validation sketch after this list).

    9. Validate Chain of Inputs: When using AutoGPT or plug-in modules, validate and sanitize inputs at each step of the chain to ensure the integrity and security of the data. Avoid blindly trusting inputs from upstream sources without appropriate validation.

    10. Collaborate with InfoSec Team: Engage and collaborate with your information security (InfoSec) team throughout the development and deployment process. Involve them in security assessments, risk analysis, and compliance evaluations to address any potential security concerns or doubts.

    Would be interested in your opinions.
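
    To make points 3, 4, 6 and 8 a little more concrete, a few rough sketches follow. First, for point 3, a minimal sketch of per-key authentication with a simple fixed-window rate limit; the key store, limits and names (VALID_API_KEYS, authorize_request) are illustrative assumptions only, and in practice you would lean on your API gateway's built-in authentication and throttling.

    import time
    from collections import defaultdict

    # Illustrative values only; use a real secret store and gateway policy in production.
    VALID_API_KEYS = {"example-key-123": "analytics-team"}
    MAX_REQUESTS_PER_MINUTE = 30

    _recent_requests = defaultdict(list)  # api_key -> timestamps of recent calls

    def authorize_request(api_key):
        """Authenticate the caller and enforce a simple per-key rate limit."""
        caller = VALID_API_KEYS.get(api_key)
        if caller is None:
            raise PermissionError("Unknown API key")
        now = time.time()
        window = [t for t in _recent_requests[api_key] if now - t < 60]
        if len(window) >= MAX_REQUESTS_PER_MINUTE:
            raise PermissionError("Rate limit exceeded")
        window.append(now)
        _recent_requests[api_key] = window
        return caller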
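
    For point 4, a sketch of the kind of structured audit record I have in mind; the field names are assumptions and should be aligned with whatever your SIEM expects.

    import json
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO)
    audit_logger = logging.getLogger("llm_access_audit")

    def log_llm_access(user, endpoint, retrieved_doc_ids):
        """Record who accessed the LLM or vector database, when, and what was retrieved."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "endpoint": endpoint,
            "retrieved_doc_ids": retrieved_doc_ids,  # log identifiers, not the content itself
        }
        audit_logger.info(json.dumps(record))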
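
    For point 6, the heavy lifting would come from a dedicated library such as guardrails.ai, but the underlying idea can be sketched as a plain output-validation step; the regexes and blocked phrases below are placeholders, not a complete policy.

    import re

    # Placeholder checks; a real guardrail layer would be far more comprehensive.
    EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
    SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
    BLOCKED_PHRASES = ("internal use only", "confidential")

    def validate_output(response_text):
        """Check an LLM response against simple compliance rules before it is shown or acted on."""
        if EMAIL_PATTERN.search(response_text) or SSN_PATTERN.search(response_text):
            raise ValueError("Response appears to contain PII")
        lowered = response_text.lower()
        for phrase in BLOCKED_PHRASES:
            if phrase in lowered:
                raise ValueError("Response contains a blocked phrase")
        return response_text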
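
    And for point 8 (the same idea applies to the chained inputs in point 9), a rough sketch of validating prompts before anything reaches the model; the length limit and deny-list patterns are illustrative, and pattern matching on its own is not a sufficient defence.

    import re

    # Illustrative limits and deny-list; real deployments should layer multiple defences.
    MAX_PROMPT_LENGTH = 2000
    INJECTION_PATTERNS = (
        r"ignore (all )?previous instructions",
        r"disregard .{0,40}(rules|instructions)",
        r"reveal .{0,40}system prompt",
    )

    def validate_prompt(prompt):
        """Reject or sanitise a user prompt before it is passed to the LLM."""
        if len(prompt) > MAX_PROMPT_LENGTH:
            raise ValueError("Prompt exceeds maximum allowed length")
        lowered = prompt.lower()
        for pattern in INJECTION_PATTERNS:
            if re.search(pattern, lowered):
                raise ValueError("Prompt matches a known injection pattern")
        # Strip control characters that could hide instructions from human review.
        return re.sub(r"[\x00-\x08\x0b\x0c\x0e-\x1f\x7f]", "", prompt)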



    ------------------------------
    Ken Huang
    CEO
    DistributedApps
    ------------------------------



  • 5.  RE: Securing ML models

    Posted Jun 14, 2023 06:49:00 AM

    Interesting. The MITRE ATLAS framework https://atlas.mitre.org/ (modeled after the MITRE ATT&CK framework https://attack.mitre.org/) is a good survey of the AI and ML attack surface. Consider also checking out a Microsoft viewpoint (a deliverable of the AETHER Engineering Practices for AI Working Group) on threat modeling in the AI and ML space: https://learn.microsoft.com/en-us/security/engineering/threat-modeling-aiml



    ------------------------------
    Mark Yanalitis
    ------------------------------