I mostly agree with the framing, but I'd suggest considering one other dimension, one I've recently been providing guidance to our business on. It might be thought of as a counterpoint to your 4) or as a 5): when is it inappropriate to use ChatGPT? As it evolves as a potential tool in the zeitgeist, people who don't necessarily understand its vagaries are finding it useful for doing complex things. One life hack we came across for a sales team recently was to upload all of your notes from a sales call (often subject to NDA) to ChatGPT after the meeting to quickly summarize them and create a follow-up email to send out right after you're done. Great time-saving tip, and a completely inappropriate use of company restricted and confidential information. And that's just sales; I hate to think what the finance, HR, legal, or engineering life hacks people come up with will be. Of course, we might soon move to a world where this is an adequately licensed tool with proper data governance, one that can deal reasonably with data protection and that we then provide to employees to use in exactly this fashion. But I'm not sure that is even a complete option yet (though, to be honest, I haven't explored what their commercial licensing looks like).
In the more general sense, the question is how to balance the risk of sharing sensitive information (and how that information will be protected) against the rewards from the new scenarios it can unlock, and how far OpenAI, Azure, and others can be pushed to prioritize data governance for the data sets that are necessarily shared to enable those scenarios.
It's not the first time a new technology with a B2C option has been used inappropriately as part of business, but its ability to use data to create truly compelling results increases the risk that folks will use it without thinking clearly about what they are doing.
------------------------------
Peter Oehlert
Chief Security Officer
Highspot
------------------------------
Original Message:
Sent: Jan 24, 2023 07:47:38 AM
From: Jim Reavis
Subject: ChatGPT Research
Hi All,
I would appreciate the community helping us think through what CSA's approach to research should be in light of the quick uptake of ChatGPT. I know ChatGPT is not unique in the world, but it has certainly reached the mainstream and caught the attention of some of the smartest people I follow in our industry. I believe the attention it is currently getting will help us build better AI/ML security best practices, and I think CSA should put together a white paper in short order as part of a longer-term research effort. It seems to me the four dimensions are: 1) How malicious actors can use it to create new and improved cyberattacks, 2) How defenders can use it to improve cybersecurity programs, 3) How it can be directly attacked to produce incorrect or otherwise bad results, and finally, 4) How to enable the business to use it securely.
I appreciate any input you have on how I am framing this and any anecdotes you want to share!
------------------------------
Jim Reavis CCSK
Cloud Security Alliance
Bellingham WA
------------------------------