The Inner Circle


ChatGPT Research

  • 1.  ChatGPT Research

    Posted Jan 24, 2023 07:48:00 AM
    Hi All,

    I would appreciate the community helping us think through what CSA's approach to research should be in light of the rapid uptake of ChatGPT. I know ChatGPT is not unique in the world, but it has certainly reached the mainstream and caught the attention of some of the smartest people I follow in our industry. I believe the attention it is currently getting will help us build better AI/ML security best practices, and I think CSA should put together a white paper in short order as part of a longer-term research effort. It seems to me the four dimensions are: 1) How malicious actors can use it to create new and improved cyberattacks, 2) How defenders can use it to improve cybersecurity programs, 3) How it can be directly attacked to produce incorrect or otherwise bad results, and 4) How to enable the business to use it securely.

    I appreciate any input you have on how I am framing this and any anecdotes you want to share!


    ------------------------------
    Jim Reavis CCSK
    Cloud Security Alliance
    Bellingham WA
    ------------------------------


  • 2.  RE: ChatGPT Research

    Posted Jan 25, 2023 07:42:00 AM
    Hello Jim,

    Happy to help you with this. I am one of a few experts at Microsoft working at the intersection of AI/ML and security, including Responsible AI (the term for explainability under ethical terms). I am quite engaged in our Zero Trust working groups here, together with your colleague Eric Johnson.


    Kindest regards,
    Lars

    ------------------------------
    Lars Ruddigkeit
    Account Technical Strategist Swiss FedGov
    Microsoft Switzerland
    ------------------------------



  • 3.  RE: ChatGPT Research

    Posted Jan 25, 2023 08:02:00 AM
    Greetings. Since the CBS Sunday Morning story, many are concerned about originality. My take is that academic writing requires the identification of sources through references and in-text citations. If students use this application, I would require them to identify their sources; ChatGPT itself would not be an adequate source.

    If Microsoft adds ChatGPT to its products, how will originality be assessed?

    ------------------------------
    Ron Martin, Ph.D.
    Professor of Practice
    Capitol Technology University
    [email protected]
    ------------------------------



  • 4.  RE: ChatGPT Research

    Posted Jan 25, 2023 09:55:00 AM
    A lot to ponder here, Ron. Early users tell me that they are using ChatGPT for inspiration or as a template for original works they must create. I would think a researcher would need to manually add citations for any factual statement in a report ChatGPT produced.

    ------------------------------
    Jim Reavis CCSK
    Cloud Security Alliance
    Bellingham WA
    ------------------------------



  • 5.  RE: ChatGPT Research

    Posted Jan 25, 2023 12:51:00 PM
    Yes, that is true. The problem is how to give credit to the sources from which ChatGPT derived the information.

    --
    Dr. Ron Martin, CPP





  • 6.  RE: ChatGPT Research

    Posted Jan 25, 2023 09:32:00 AM
    Thanks, Lars, I would love your help. Do you think I framed the paper in a reasonable way?

    ------------------------------
    Jim Reavis CCSK
    Cloud Security Alliance
    Bellingham WA
    ------------------------------



  • 7.  RE: ChatGPT Research

    Posted Jan 25, 2023 08:47:00 AM
    I mostly agree with the framing, but I'd suggest considering one other dimension that I've recently been providing guidance to our business on. It might be thought of as a counterpoint to your bullet 4), or as a 5): when is it inappropriate to use ChatGPT at all? As it evolves into a potential tool in the zeitgeist, people who don't necessarily understand its vagaries are finding it useful for doing complex things. One life hack we came across for a sales team recently was to upload all of your notes (often subject to NDA) from a sales call to ChatGPT after the meeting, to quickly summarize them and create a follow-up email to send out right after you're done. A great time-saving tip, and a completely inappropriate use of company restricted and confidential information. And this is sales; I hate to think of the finance, HR, legal, or engineering life hacks people will come up with. Of course, we might soon move to a world where this is an adequately licensed tool that has proper data governance, can deal reasonably with data protection, and is provided to employees to use in exactly this fashion. But I'm not sure that is even a complete option yet (though, to be honest, I haven't explored what their commercial licensing looks like).

    In the more general sense, the question is how to balance the risk of sharing sensitive information, and how that information will be protected, against the rewards of the new scenarios it can unlock. And how much can OpenAI, Azure, and others be pushed to prioritize data governance for the data sets that necessarily have to be shared to enable those scenarios?

    It's not the first time that a new technology offering a B2C option has been used inappropriately inside a business, but its ability to use data to create truly compelling results increases the risk that folks will use it without thinking clearly about what they are doing.
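
    As a concrete illustration of the data-governance gap, here is a minimal sketch of a pre-send scrubbing step, assuming Python and purely illustrative regex patterns; a real DLP control would be far more thorough:

    import re

    # Hypothetical guardrail: scrub obvious identifiers from meeting notes
    # before they leave the company boundary for a third-party API.
    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    }

    def redact(text: str) -> str:
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    notes = "Follow up with Jane, [email protected], +1 555 010 2233, re: renewal."
    print(redact(notes))  # identifiers replaced with [EMAIL] / [PHONE] tags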

    ------------------------------
    Peter Oehlert
    Chief Security Officer
    Highspot
    ------------------------------



  • 8.  RE: ChatGPT Research

    Posted Jan 25, 2023 09:48:00 AM
    This is all fair, Peter. We need to think about how to incorporate this and provide guidance on using ChatGPT securely that also articulates when it is inappropriate to use at all. ChatGPT does not overrule compliance mandates to protect information, and it would clearly be wrong to train ML systems on PII, for example.

    ------------------------------
    Jim Reavis CCSK
    Cloud Security Alliance
    Bellingham WA
    ------------------------------



  • 9.  RE: ChatGPT Research

    Posted Jan 26, 2023 09:16:00 AM
    Hi all,
    • The issue is not so much about ChatGPT specifically, but about AI tools generically. And generic AI is nowhere near as powerful as dedicated ML tools.
    • Users taking company data and sending it to a third-party provider, SaaS provider, or any other external entity need to be governed by company rules that control these actions. It is clearly not sensible to send company data anywhere on the web, not to mention all the laws and regulations that apply here.
    • There is nothing in IT that can't be properly controlled and used in an appropriate and safe manner, if there is the will to do so. There is no social media platform that couldn't eliminate hate speech, or anything else, instantly and automatically, if it wanted to.
    • It needs to be up to the AI platforms to regulate users, follow ethical policies, and remove the accounts of users who violate those policies.
    • If this doesn't work out, it will end up with legislators.
    KR



    ------------------------------
    Emilio Mazzon CISM, CISA, CEng, CITP, CSA Board Director
    VP
    SNCL
    ------------------------------



  • 10.  RE: ChatGPT Research

    Posted Jan 26, 2023 11:23:00 AM
    The National Institute of Standards and Technology just released the attached publication.

    It provides a general view of AI risks.
    --
    Dr. Ron Martin, CPP



    Attachment(s)

    pdf
    NIST.AI.100-1.pdf   1.85 MB 1 version


  • 11.  RE: ChatGPT Research

    Posted Jan 26, 2023 11:41:00 AM
    This is great. We should be able to invite NIST to participate in our research on this topic.
    --
    Jim Reavis
    [email protected]
    CEO, Cloud Security Alliance
    +1.360.820.2545








  • 12.  RE: ChatGPT Research

    Posted Jan 25, 2023 12:54:00 PM


    Please review:

    ChatGPT: Grading artificial intelligence's writing - CBS News
    www.cbsnews.com

    "OpenAI's artificial intelligence writing program ChatGPT will, with a few prompts, compose poetry, prose, song lyrics, essays, even news articles. And that has ethicists and educators worried..."


    --
    Dr. Ron Martin, CPP






  • 15.  RE: ChatGPT Research

    Posted Jan 25, 2023 10:17:00 AM
    I would consider this tweet as a general direction for points 2 and/or 4:

    https://twitter.com/pdhsu/status/1615059981044441088?t=yYBjph-lyVW7kTJd10-0AA&s=19

    ------------------------------
    Zbyszek K-M
    cybersecurity expert
    IBM BTO
    ------------------------------



  • 16.  RE: ChatGPT Research

    Posted Jan 25, 2023 12:14:00 PM
    Edited by Andreas Baeuml Jan 25, 2023 12:14:46 PM
    Hello Jim, 

    As far as I can tell, a lot of people use it with highly sensitive data without knowing the consequences, much like what Peter mentioned in his reply. In the short term, that is probably the most dangerous thing for companies. In the long term, it is probably a combination of points 1, 2, and 4 (even though I think 3 is also a very interesting topic).

    Also: This report by CyberArk sums up pretty precisely what to expect for your point 1. We can expect to see more sophisticated polymorphic malware that may be able to hide from antivirus systems. https://www.cyberark.com/resources/threat-research-blog/chatting-our-way-into-creating-a-polymorphic-malware

    ------------------------------
    Andreas Baeuml M.Sc. IT-Security, CCSK, AWS CCP
    ------------------------------



  • 17.  RE: ChatGPT Research

    Posted Jan 25, 2023 02:29:00 PM
    Hi Jim,
    "I believe the attention it is currently getting will help us build better AI/ML security best practices, and I think CSA should put together a white paper in short order as part of a longer-term research effort. It seems to me the four dimensions are: 1) How malicious actors can use it to create new and improved cyberattacks, 2) How defenders can use it to improve cybersecurity programs, 3) How it can be directly attacked to produce incorrect or otherwise bad results, and 4) How to enable the business to use it securely."
    Your suggestions are very good, and regulations should also be applied.
    I think along two dimensions: first, contact ChatGPT's creator to pursue these goals; second, in academic sectors, let the use of ChatGPT continue, with close observation from faculty regarding the quality of student work.

    Thank You For Highlighting 



    ------------------------------
    Elrasheid Mohmed Ahmed Adam Elrayah
    Tech Expert University
    ------------------------------



  • 18.  RE: ChatGPT Research

    Posted Jan 25, 2023 02:30:00 PM
    Of course you knew I was going to use ChatGPT to write a draft version of the report.

    ------------------------------
    Jim Reavis CCSK
    Cloud Security Alliance
    Bellingham WA
    ------------------------------

    Attachment(s)

    pdf
    Cybersecurity-ChatGPT.pdf   183 KB 1 version


  • 19.  RE: ChatGPT Research

    Posted Jan 25, 2023 11:49:00 PM
    Regarding 1 & 2, I believe these are two sides of the same coin.
    • For 1, as described by Andreas: you can expect much better malware, free of typos and bad grammar.
    • For 2, it is the same: ChatGPT has a "human" style of writing, but you should be able to detect that style itself (see the sketch below). Therefore, ChatGPT can also be used to create training examples of new cyberattack styles.
    The question is who will be faster. I bet on 1, the attacker.
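    To make the detection idea concrete, here is a minimal sketch, assuming Python with the Hugging Face transformers library and the common heuristic that model-generated text tends to score a lower perplexity under a language model than human writing; the sample text and any decision threshold are illustrative:

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        # Lower perplexity = the text looks more "model-like" to GPT-2.
        enc = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            loss = model(input_ids=enc.input_ids, labels=enc.input_ids).loss
        return torch.exp(loss).item()

    print(perplexity("Cloud security requires a layered approach to risk."))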
    • For 3, from a design perspective: overall, models like ChatGPT are well protected, because the frameworks of the Big Tech companies anticipate attacks at this level. The average company can use ChatGPT safely because, for them, it is in the end just a secured endpoint. The same need not be true for AI models hosted by other companies. We use MLOps, but I have never heard of "real ML security" that protects against:
    1. Data poisoning attacks
    2. Adversarial attacks (see the sketch at the end of this post)
    3. Evasion techniques
    4. "Oracle" attacks
    5. Exploitation of missing model input validation
    6. Model extraction attacks
    The challenge with these topics is their nature: the discussions happen in groups with deep AI knowledge, and normally not in cyber defense groups. The attack/defense surface is completely different.
    • For 4, to enable business: a lot of understanding is missing here. Many people believe they can adapt ChatGPT to their "data". That is not the purpose of ChatGPT; it was built to demonstrate the power of these large models. Businesses can instead take this "type" of model and retrain it on their own data.
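
    For item 2 in the list above, a minimal sketch of what an adversarial-example attack looks like in code, assuming Python/PyTorch, a toy classifier, and the classic fast gradient sign method; this illustrates the technique, it is not a real exploit:

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        # Nudge input x in the direction that maximally increases the
        # classifier's loss, keeping the change imperceptibly small.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()  # keep valid pixel range

    # Toy usage: a random "image" against an untrained linear classifier.
    toy_model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
    x, y = torch.rand(1, 1, 28, 28), torch.tensor([3])
    x_adv = fgsm_attack(toy_model, x, y)
    print((x_adv - x).abs().max().item())  # perturbation bounded by epsilon

    Defenses such as adversarial training and strict model input validation are exactly the "real ML security" controls I mean above.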


    ------------------------------
    Lars Ruddigkeit
    Account Technical Strategist Swiss FedGov
    Microsoft Switzerland
    ------------------------------



  • 20.  RE: ChatGPT Research

    Posted Jan 26, 2023 12:02:00 AM
    OpenAI service models, i.e., the definition of "type" from my previous post:
    GPT-3 family:
    1. Ada: simple classification, parsing, and formatting of text
    2. Babbage: semantic search ranking, moderately complex classification
    3. Curie: answering questions, complex and nuanced classification
    4. Davinci: summarizing for a specific audience, generating creative content
    Codex family (Codex is based on GPT-3 but focuses on source code creation/completion; I would call it a domain version of GPT-3):
    1. Cushman-codex: a smaller version of Davinci, faster but with lower-quality results
    2. Davinci-codex: the main model for Codex
    Why the difference between the two? Cushman is used for autocomplete scenarios (click a button and expect direct completion), while Davinci is required to create not just a line but a small piece of software.
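
    A minimal sketch of how this model choice surfaces in practice, assuming Python with the OpenAI client library as it looked in early 2023 (the 0.x Completion API); the prompt and model names are illustrative:

    import openai

    openai.api_key = "sk-..."  # placeholder; load from a secret store in practice

    # Same prompt, different model: ada is cheap and fast, davinci is the
    # most capable; the trade-off is cost and latency versus quality.
    for model in ("text-ada-001", "text-davinci-003"):
        response = openai.Completion.create(
            model=model,
            prompt="Classify the sentiment of: 'The audit went smoothly.'",
            max_tokens=20,
        )
        print(model, "->", response["choices"][0]["text"].strip())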

    I hope this helps to explain why the news about ChatGPT currently mixes many topics: people are not aware of the differences in the underlying AI model architectures.

    Back to 1 & 2: Codex can improve the overall security of our software, but cybercriminals will try to get code ideas (snippets) incorporated into Codex as backdoors.

    ------------------------------
    Lars Ruddigkeit
    Account Technical Strategist Swiss FedGov
    Microsoft Switzerland
    ------------------------------




  • 22.  RE: ChatGPT Research

    Posted Jan 26, 2023 02:40:00 AM
    Hi Jim,
    Attached is a paper a colleague of mine published recently on this topic.
    Best regards,
    Julia


    ------------------------------
    Julia Ward
    Director, CTO Office
    WithSecure
    ------------------------------




  • 24.  RE: ChatGPT Research

    Posted Jan 27, 2023 04:49:00 AM
    This is just a repeat of my other message to be sure the participants in this thread are aware of it.

    Hi All,

    @Jim Reavis

    NIST AI Risk Management Framework Aims to Improve Trustworthiness

    NIST today released its Artificial Intelligence Risk Management Framework (AI RMF 1.0), a guidance document for voluntary use by organizations designing, developing, deploying, or using AI systems to help manage the risks of AI technologies. The Framework seeks to cultivate trust in AI technologies and promote AI innovation while mitigating risk. The AI RMF follows a direction from Congress for NIST to develop the framework and was produced in close collaboration with the private and public sectors over the past 18 months.

    AI RMF 1.0 was released at a live-streamed event today with Deputy Secretary of Commerce Don Graves, Under Secretary for Technology and Standards and NIST Director Laurie Locascio, Principal Deputy Director for Science and Society in the White House Office of Science and Technology Policy Alondra Nelson, House Science, Space, and Technology Chairman Frank Lucas and Ranking Member Zoe Lofgren, and panelists representing businesses and civil society. A recording of the event is available here (https://www.nist.gov/news-events/events/2023/01/nist-ai-risk-management-framework-ai-rmf-10-launch).
    NIST also today released, for public comment, a companion voluntary AI RMF Playbook, which suggests ways to navigate and use the framework, a Roadmap for future work to enhance the Framework and its use, and the first two AI RMF 1.0 crosswalks with key AI standards and US and EU documents.

    NIST plans to work with the AI community to update the framework periodically and welcomes suggestions for additions and improvements to the Playbook at any time.

    Comments received through February 2023 will be included in an updated version of the Playbook to be released in spring 2023.

    Sign up to receive email notifications about NIST's AI activities here or contact us at: [email protected]. Also, see information about how to engage in NIST's broader AI activities.





    ------------------------------
    Michael Roza CPA, CISA, CIA, CC, MBA, Exec MBA
    ------------------------------