2 weeks ago - last edited 2 weeks ago
ChatGPT and other AI chatbots have been getting a lot of attention lately. They are extremely helpful for finding information, responding to custom queries, and even writing code, along with other common daily tasks. They represent an exciting new digital frontier, but from a security perspective they also represent yet another place for your corporate data to go. Netskope’s inline Next-gen Secure Web Gateway can block the posting of sensitive data to collaboration tools such as Slack and Teams out of the box. That got me thinking: since ChatGPT is essentially a chat system, why can’t we use a similar capability to block posting sensitive information to ChatGPT? As a quick tangent, I also want to give OpenAI some credit, as they warn the user that it’s not appropriate to put personal data into their platform:
That being said, this warning appears after I have already posted the data, and it relies on ChatGPT recognizing the data as sensitive, which may not happen with other sensitive data types. As one final note, this article was written based on an “All Web” steering configuration. If you use Netskope for inline CASB (SaaS), you may need to add additional domains to the custom app we will create later on. Let’s start at the top and walk through the different approaches Netskope can take to securing ChatGPT.
The Legacy Proxy Approach: Block the URL
This is the most heavy-handed approach, but unfortunately it’s the only way most Next-Generation Firewalls and legacy proxies can handle this case. It’s a simple allow-or-block mechanism. Netskope can, of course, block the category and URL for ChatGPT with a simple policy. It’s as simple as creating a URL list within a category and blocking access:
Now when any of my users attempt to access ChatGPT, they will be blocked:
However, as I mentioned, this is a rather heavy-handed approach. What if I want my users to have access but worry about them putting sensitive data into ChatGPT? A traditional proxy has no understanding of activities beyond browse, upload, and download, so you must either allow or block access entirely or rely on third-party DLP solutions.
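The allow-or-block logic of a legacy proxy can be sketched in a few lines. This is only an illustration of the concept; the domain list below is an assumption for the example, not Netskope’s actual category data:

```python
# Minimal sketch of legacy-proxy URL blocking: the hostname either
# matches a blocked domain list or the request is allowed.
# BLOCKED_DOMAINS is illustrative, not a real category feed.
BLOCKED_DOMAINS = {"chat.openai.com", "chatgpt.com"}

def legacy_proxy_decision(hostname: str) -> str:
    """Return 'block' or 'allow' -- no notion of activity or data context."""
    host = hostname.lower().rstrip(".")
    # Match the domain itself or any subdomain of it.
    if any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS):
        return "block"
    return "allow"
```

Note that the function’s only inputs are the hostname: there is no concept of *what* the user is doing on the site, which is exactly the limitation described above.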
The Slightly Less Traditional Approach: Coach the Users on Using ChatGPT
We can take the previous approach and fine-tune it to allow users access to ChatGPT but coach them before they post a message. To do this, we create a URL list and category with the URL and path that is used to post messages:
We can then create a policy to prompt the user to continue via a User Alert action:
Now my user is allowed to browse to ChatGPT but when they post, they are prompted to accept the risk and confirm that they are going to appropriately handle data:
Once they click Proceed, ChatGPT completes the response:
If they click Stop, then the action is blocked and no response is received:
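Conceptually, this coaching policy keys on the combination of HTTP method, host, and path. A minimal sketch of that match follows; the posting path used here is a hypothetical placeholder, since the real endpoint is not shown above and can change over time:

```python
from urllib.parse import urlparse

# Hypothetical message-posting path -- an assumption for illustration
# only; inspect your own traffic to identify the real endpoint.
POST_PATH_PREFIX = "/backend-api/conversation"

def needs_user_alert(url: str, method: str) -> bool:
    """True when a request looks like a ChatGPT message post, i.e. the
    point at which a coaching prompt should be shown to the user."""
    parsed = urlparse(url)
    return (
        method.upper() == "POST"
        and parsed.hostname == "chat.openai.com"
        and (parsed.path or "").startswith(POST_PATH_PREFIX)
    )
```

Ordinary browsing (GET requests, other paths) falls through and is allowed without interruption, which is what makes this gentler than the URL-block approach.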
The Next-Gen Secure Web Gateway Approach: Block Based on Data and Activity Context
If the first approach was a hammer, the Netskope approach is a scalpel in the hands of a skilled surgeon. Maybe we want our users to be able to use ChatGPT freely, but we are still concerned about what they post there. After all, ChatGPT can be a powerful productivity tool for daily tasks, even for the most technical employees; take a look at some of the posts on the Sysadmin subreddit as examples. Because the Netskope Web Gateway understands activities within applications, I created a custom App Definition in Netskope that recognizes the post activity within ChatGPT.
With that done, it’s just a matter of creating a policy to block the sensitive data profiles via Netskope’s DLP engine:
I have two DLP profiles: one for any amount of PII, and one for the string “ChatGPT Test String.” With the policy created, I can post harmless, non-sensitive data and get an answer:
When I post the test string I get blocked by Netskope with a customizable block page:
Now for the final test, let’s grab some PII data and try to post it to ChatGPT:
I once again get blocked, with a block message informing the user of the violation. The data they tried to upload is also logged via Netskope’s Incident Management and Forensics for administrators to review. With a few simple policies, we were able to coach our users on appropriate use and prevent the movement of sensitive data to a third-party service.
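To make the DLP matching concrete, here is a toy stand-in for the two profiles described above, using a US SSN regex as a proxy for the PII profile plus the literal test string. This is only a sketch: Netskope’s actual DLP engine uses predefined identifiers, proximity rules, and validation far beyond a single regex.

```python
import re

# Toy stand-ins for the two DLP profiles: a US SSN pattern as a PII
# proxy, and the literal test string used in the article.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
TEST_STRING = "ChatGPT Test String"

def dlp_verdict(post_body: str) -> str:
    """Return which profile (if any) a posted message would trip."""
    if TEST_STRING in post_body:
        return "block: test-string profile"
    if SSN_PATTERN.search(post_body):
        return "block: PII profile"
    return "allow"
```

A harmless prompt returns "allow" and passes through to ChatGPT, while a body containing either the test string or an SSN-shaped value is blocked, mirroring the three outcomes shown in the screenshots above.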
a week ago - last edited a week ago
What's the likelihood of Netskope introducing a new Web Category to cover these services? I've had some queries from leadership on my side asking about controlling them. For the moment, we've already adopted method 1 above, but as the scope and number of these sites expands, I see it becoming an expanding hole that we can't keep up with.
a week ago
A request was entered last week to create an App Category for AI-based apps.
Ask your Netskope TAM, CSM, or account executive to track request # ER-1666 for you.
This has not yet been accepted onto the product roadmap, but I would imagine it has a good chance of being added.
Hi Sam Shiflett, great article!
I just have one doubt: for me, ChatGPT only works if I enable an SSL decryption exception for *.openai.com.
With that exception in place, I'm unable to use the Real-time Protection policy.