Use Case 5: Generative AI Data Protection for ChatGPT and Copilot

Prevent accidental sharing of sensitive data in AI-assisted interactions with browser-based DLP and real-time remediation options: warning, blocking, auditing, and requiring justification.

The Problem

Generative AI tools like ChatGPT and GitHub Copilot have become integral to productivity, but they introduce significant data security risks. Employees may inadvertently paste sensitive data such as PII, PHI, or proprietary information into these tools, where it can be retained by the AI provider or trigger compliance violations.

Key challenges include:

  • Monitoring and securing interactions with generative AI tools in real time.
  • Preventing employees from sharing sensitive data through AI inputs or outputs.
  • Balancing security enforcement with user productivity and adoption.

Strac’s Solution

Strac provides a browser-based DLP solution that seamlessly integrates with generative AI tools like ChatGPT and Copilot to monitor and secure sensitive data in real time. The solution supports four customizable modes of protection to address diverse organizational needs.

Four Modes of Protection:

  1. Audit Mode:
    • Logs all sensitive data interactions with AI tools without enforcing restrictions.
    • Ideal for monitoring and reporting usage patterns for compliance or internal reviews.
  2. Block Mode:
    • Prevents sensitive data from being entered into or shared via AI tools.
    • Ensures strict enforcement of data security policies.
  3. Warn Mode:
    • Alerts users when sensitive data is detected in their AI interactions, providing real-time feedback to encourage secure usage.
    • Does not block actions, balancing security with workflow continuity.
  4. Confirm Mode (Justification Required):
    • Prompts users to provide a justification when attempting to share sensitive data.
    • Adds an additional layer of accountability and visibility into data-sharing practices.
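The four modes can be thought of as a single policy decision applied whenever sensitive data is detected. The sketch below is illustrative only, assuming hypothetical `Mode`, `remediate`, and helper names; it is not Strac's actual API:

```python
from enum import Enum
from typing import Optional

class Mode(Enum):
    AUDIT = "audit"      # log the interaction, take no other action
    WARN = "warn"        # alert the user, but allow the action
    CONFIRM = "confirm"  # allow only if the user supplies a justification
    BLOCK = "block"      # stop the action entirely

def audit_log(event: str) -> None:
    print(f"[audit] {event}")  # stand-in for a real audit sink

def notify_user(message: str) -> None:
    print(f"[warn] {message}")  # stand-in for an in-browser alert

def remediate(mode: Mode, finding: str, justification: Optional[str] = None) -> bool:
    """Return True if the AI interaction may proceed under the given mode."""
    audit_log(f"sensitive data detected: {finding}")  # every mode records the event
    if mode is Mode.BLOCK:
        return False
    if mode is Mode.WARN:
        notify_user(f"{finding} detected in your prompt")
        return True
    if mode is Mode.CONFIRM:
        return bool(justification and justification.strip())
    return True  # AUDIT: log only, never interfere
```

Note how every branch writes to the audit log first: the modes differ only in how much friction they add after detection, which is what makes a staged rollout from Audit through Block possible.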

Additional Features:

  1. Advanced Detection:
    • Identifies PII, PHI, payment card (PCI) data, and intellectual property in inputs and outputs in real time.
    • Leverages ML and OCR models to detect sensitive data in free text, code, and file uploads.
  2. Custom Policies:
    • Supports granular, role-based policies tailored to your organization’s needs.
  3. Compliance Reporting:
    • Generates detailed audit reports showing AI usage patterns and the remediation actions applied.
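To make the detection step concrete, here is a minimal pattern-based scanner for an AI prompt. This is a rough sketch with hypothetical names; production systems like the one described above rely on ML and OCR models, not bare regexes:

```python
import re

# Illustrative patterns only; a real detector uses trained models and validation.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_prompt(text: str) -> list:
    """Return the names of sensitive-data types found in an AI prompt."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]
```

A scanner like this would run on each prompt before it leaves the browser, and its findings would feed whichever remediation mode (audit, warn, confirm, or block) the policy assigns.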

Customer Case Study

A global consulting firm with 1,000 employees in the USA implemented Strac to monitor ChatGPT usage across its teams. The firm was concerned about employees inadvertently sharing client PII or confidential project details while leveraging generative AI for drafting emails and generating code.

With Strac, they:

  • Enabled Audit Mode to assess data-sharing risks in AI tools during the initial deployment.
  • Transitioned to Warn Mode and Confirm Mode to educate users and collect justifications for sensitive interactions.
  • Finally enforced Block Mode to prevent unauthorized sharing of sensitive data altogether.
  • Detected and addressed over 3,000 instances of attempted sensitive data sharing within the first 90 days.

"Strac's flexible modes allowed us to adapt our AI security strategy as we learned more about our team's usage patterns. It’s the perfect blend of control, awareness, and security for generative AI tools." - Director of Privacy