The Inner Circle


ChatGPT Research

  • 1.  ChatGPT Research

    Posted 5 days ago
    Hi All,

    I would appreciate the community helping us think through what CSA's approach to research should be in light of the rapid uptake of ChatGPT. I know ChatGPT is not unique in the world, but it has certainly reached the mainstream and caught the attention of some of the smartest people I follow in our industry. I believe the attention it is currently getting will help us build better AI/ML security best practices, and I think CSA should put together a white paper in short order as part of a longer-term research effort. It seems to me the four dimensions are: 1) how malicious actors can use it to create new and improved cyberattacks; 2) how defenders can use it to improve cybersecurity programs; 3) how it can be directly attacked to produce incorrect or otherwise bad results; and finally, 4) how to enable the business to use it securely.

    I appreciate any input you have on how I am framing this and any anecdotes you want to share!


    ------------------------------
    Jim Reavis CCSK
    Cloud Security Alliance
    Bellingham WA
    ------------------------------


  • 2.  RE: ChatGPT Research

    Posted 4 days ago
    Hello Jim,

    Happy to help with this. I am one of a few experts at Microsoft working at the intersection of AI/ML and security, including Responsible AI (the term for explainability in ethical terms). I am quite engaged in our Zero Trust working groups here, together with your colleague Eric Johnson.


    Kindest regards,
    Lars

    ------------------------------
    Lars Ruddigkeit
    Account Technical Strategist Swiss FedGov
    Microsoft Switzerland
    ------------------------------



  • 3.  RE: ChatGPT Research

    Posted 4 days ago
    Greetings. Since the CBS Sunday Morning story, many are concerned about originality. My take is that academic writing requires the identification of sources through references and in-text citations. If students use this application, I would require the source. ChatGPT would not be an adequate source.

    If Microsoft adds ChatGPT, how will originality be assessed?

    ------------------------------
    Ron Martin, Ph.D.
    Professor of Practice
    Capitol Technology University
    rlmartin1@captechu.edu
    ------------------------------



  • 4.  RE: ChatGPT Research

    Posted 4 days ago
    A lot to ponder here, Ron. Early users tell me they are using ChatGPT to create inspiration or a template for original works they must create. I would think a researcher would need to manually add citations for any factual statement in a report ChatGPT produced.

    ------------------------------
    Jim Reavis CCSK
    Cloud Security Alliance
    Bellingham WA
    ------------------------------



  • 5.  RE: ChatGPT Research

    Posted 4 days ago
    Yes, that is true. The problem is how to give credit to the sources from which ChatGPT derived the information.

    --
    Dr. Ron Martin, CPP





  • 6.  RE: ChatGPT Research

    Posted 4 days ago
    Thanks, Lars; I would love your help. Do you think I framed the paper in a reasonable way?

    ------------------------------
    Jim Reavis CCSK
    Cloud Security Alliance
    Bellingham WA
    ------------------------------



  • 7.  RE: ChatGPT Research

    Posted 4 days ago
    I mostly agree with the framing, but I'd suggest considering one other dimension that I've recently been providing guidance to our business on. It might be thought of as a counterpoint to your bullet 4), or as a 5): when is it inappropriate to use ChatGPT at all. As it evolves as a potential tool in the zeitgeist, people who don't necessarily understand its vagaries are finding it useful for complex things. One life hack we came across for a sales team recently was to upload all of your notes from a sales call (often subject to NDA) to ChatGPT after the meeting to quickly summarize them and create a follow-up email to send out right after you're done. A great time-saving tip, and a completely inappropriate use of company restricted and confidential information. And this is sales; I hate to think of the finance, HR, legal, or engineering life hacks people will come up with. Of course, we might soon move to a world where this is an adequately licensed tool with proper data governance, one that can deal reasonably with data protection and that we then provide to employees to use in exactly this fashion. But I'm not sure that is a complete option yet (though, to be honest, I haven't explored what their commercial licensing looks like).

    In the more general sense, the question is how to balance the risk of sharing sensitive information (and how that information will be protected) against the rewards from the new scenarios it can unlock, and how far OpenAI, Azure, and others can be pushed to prioritize data governance for the data sets that are necessarily shared to enable those scenarios.

    It's not the first time a new technology with a B2C option has been used inappropriately in business, but its ability to use data to create truly compelling results increases the risk that folks will use it without thinking clearly about what they are doing.

    ------------------------------
    Peter Oehlert
    Chief Security Officer
    Highspot
    ------------------------------



  • 8.  RE: ChatGPT Research

    Posted 4 days ago
    This is all fair, Peter. We need to think about how to incorporate this and have guidance about using ChatGPT securely that articulates when it is inappropriate to use at all. ChatGPT does not overrule compliance mandates to protect information, and it would clearly be wrong to train ML systems with PII, for example.
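
    To make that concrete: even a crude redaction pass before notes leave the company boundary catches the most obvious PII. Below is a minimal, hypothetical Python sketch; the patterns and function names are illustrative only, and real DLP tooling is far more thorough (names, account numbers, context-aware detection, and so on).

```python
import re

# Hypothetical patterns for obviously sensitive tokens. A production
# DLP policy would cover far more than emails, SSNs, and phone numbers.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace likely PII with placeholder tags before the text is
    submitted to any external service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

notes = "Call Jane at 555-123-4567 or jane.doe@example.com re: renewal."
print(redact(notes))
# Call Jane at [PHONE REDACTED] or [EMAIL REDACTED] re: renewal.
```

    The point is not this particular code, but that the check has to run on the corporate side, before submission; nothing about the external service can be relied on to enforce it.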

    ------------------------------
    Jim Reavis CCSK
    Cloud Security Alliance
    Bellingham WA
    ------------------------------



  • 9.  RE: ChatGPT Research

    Posted 3 days ago
    Hi all,
    • The issue is not so much about ChatGPT, but about AI tools generically. And AI is nowhere near as powerful as ML tools.
    • Users taking company data and sending it to a third-party provider, SaaS provider, or any other external entity need to be governed by company rules that control these actions. It's clearly not sensible to take company data and send it anywhere on the web, not to mention all the laws and regulations that apply here.
    • There is nothing in IT that can't be properly controlled and used in an appropriate and safe manner, if there is the will to do so! There is no social media platform that couldn't eliminate any combination of hate speech or anything else if they wanted to, instantly and automatically.
    • It needs to be up to the AI platforms to regulate users, follow ethical policies, and remove users' accounts if they violate those policies.
    • If this doesn't work out, it will end up with the legislators.
    KR



    ------------------------------
    Emilio Mazzon CISM, CISA, CEng, CITP, CSA Board Director
    VP
    SNCL
    ------------------------------



  • 10.  RE: ChatGPT Research

    Posted 3 days ago
    The National Institute of Standards and Technology just released the attached publication.

    It provides a general view of AI risks.
    --
    Dr. Ron Martin, CPP



    Attachment(s)

    NIST.AI.100-1.pdf (PDF, 1.85 MB)