Question

Netskope, Microsoft Sensitivity Labels, and pasted data.

  • November 18, 2025
  • 1 reply
  • 22 views

wilson

Summary.
Reviewed https://community.netskope.com/casb-swg-inline-92/netskope-inline-dlp-with-microsoft-information-protect-mip-labels-6942?tid=6942&fid=92

It is clear from this article that we could label a document Confidential, create a DLP policy that blocks documents labeled "Confidential" from being uploaded to public AI sites, and that upload would be prevented (when used with the appropriate specific connectors, and possibly with generic connectors).

But the question is: if someone were to copy text from a labeled document and paste it directly into a prompt, would the DLP block still occur?

1 reply

wilson
  • Author
  • Explorer III
  • November 18, 2025

This is what ChatGPT says:

 

If using Microsoft Purview and sensitivity labels, one can prevent a document from being used on an AI site. But if data from within the document is copied, can it be pasted into an AI prompt if using a product like Netskope?

ChatGPT said:

Short answer: Yes, it can be blocked — if Netskope’s DLP policies are configured to inspect clipboard/paste actions into AI sites. But it depends entirely on how your organization configures Netskope.

Here’s the breakdown:

 

✅ How Purview + Sensitivity Labels Work

Sensitivity labels (with encryption) prevent:

  • Opening a protected file in unauthorized apps
  • Uploading the protected file to AI/chatbot sites
  • Copying content out of the file (this may be restricted, depending on "Copy/Extract" rights)

BUT — if the user is permitted to view the file, then Purview cannot fully stop them from manually copying text out. Purview isn’t a clipboard monitor.

So Purview alone does not prevent someone from pasting sensitive text into ChatGPT, Gemini, Copilot, etc.

 

✅ Where Netskope Comes In

Netskope’s Cloud Access Security Broker (CASB) + DLP engine can detect and block:

1. Clipboard events into AI websites

Netskope can inspect:

  • Text being typed into forms
  • Text being pasted into form fields
  • Text sent in POST requests

If the pasted or typed content matches:

  • DLP rules
  • Keywords
  • Sensitive-pattern detectors (PHI, PII, financial, etc.)
  • Purview label classifications synced to Netskope

…then Netskope can block it with a coaching message or hard block.

So if Netskope DLP is properly configured, a user copying data from a labeled document and pasting it into ChatGPT can be blocked.
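To make the idea concrete, here is a minimal sketch (in Python) of the pattern-matching step an inline DLP engine performs on an outbound request body. This is not Netskope's implementation; the detector names and patterns are assumptions for illustration only.

```python
import re

# Illustrative only: a toy version of the content-matching step an inline DLP
# engine runs on outbound web traffic. Real engines also use fingerprints, ML
# classifiers, and label metadata; these detectors are hypothetical examples.
DETECTORS = {
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Credit card (simple)": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "Confidential keyword": re.compile(r"\bconfidential\b", re.IGNORECASE),
}

def inspect_post_body(body: str) -> list[str]:
    """Return the names of detectors that match the outbound request body."""
    return [name for name, pattern in DETECTORS.items() if pattern.search(body)]

# Example: text pasted into an AI prompt and sent in a POST request
pasted_text = "Confidential: customer SSN 123-45-6789, please summarize."
hits = inspect_post_body(pasted_text)
if hits:
    print(f"BLOCK (matched: {', '.join(hits)})")  # policy action: hard block or coaching page
else:
    print("ALLOW")
```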

 

🧩 The key requirement

Your company must have configured Netskope to:

1. Monitor Generative AI domains

Examples:

  • chatgpt.com
  • bard.google.com / gemini.google.com
  • copilot.microsoft.com
  • claude.ai
  • perplexity.ai

2. Enable Inline DLP for:

  • Clipboard paste
  • Web form data
  • API requests (POST/PUT)

3. Use DLP policies that recognize the content

This can include:

  • Document fingerprints
  • Machine-learning classifiers
  • Regex patterns
  • Mirrored Microsoft Purview sensitivity classifications (Netskope supports this)

Without these settings, Netskope will not block the pasted content.
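As a rough sketch of how those three requirements combine into a verdict, the snippet below models the decision logic only. It is not Netskope policy syntax; the domain set, activity names, and flags are assumptions for illustration.

```python
# Illustrative policy evaluation: GenAI domain in scope + inline inspection
# enabled + content matching a DLP profile => block; otherwise the event passes.
GENAI_DOMAINS = {"chatgpt.com", "gemini.google.com",
                 "copilot.microsoft.com", "claude.ai", "perplexity.ai"}
INSPECTED_ACTIVITIES = {"paste", "form_post", "upload"}

def evaluate(host: str, activity: str, content_matches_dlp: bool,
             inline_dlp_enabled: bool) -> str:
    """Return the policy verdict for one outbound event."""
    in_scope = any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS)
    if not in_scope or not inline_dlp_enabled or activity not in INSPECTED_ACTIVITIES:
        return "ALLOW"  # traffic is not steered or inspected, so nothing blocks it
    return "BLOCK" if content_matches_dlp else "ALLOW"

# Properly configured: pasting sensitive text into chatgpt.com is blocked
print(evaluate("chatgpt.com", "paste", content_matches_dlp=True, inline_dlp_enabled=True))
# Missing inline DLP configuration: the same paste goes through
print(evaluate("chatgpt.com", "paste", content_matches_dlp=True, inline_dlp_enabled=False))
```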

 

❓ So what happens if everything is configured properly?

Scenario:
You copy sensitive data from a protected Word file → paste it into ChatGPT.

Outcome with proper Netskope DLP:

  • Netskope intercepts the POST request
  • Sees sensitive content
  • Blocks the request
  • Shows a justification prompt or policy block message

Outcome without proper DLP config:

  • The paste will go through
  • Purview cannot stop it alone
 

⭐ Bottom line

Purview sensitivity labels protect files.
Netskope protects data movement.
🔒 Using both together CAN prevent copying sensitive data into AI tools — but only if Netskope’s inline DLP policies are correctly configured.