How an International Law Firm Prevented 98% of High-Risk GenAI Submissions Without Locking Down Innovation
Client: International Law Firm
Industry: Legal & Professional Services
Employees: ~600
Location: Europe
The Challenge
As generative AI tools such as ChatGPT became widely available, employees at a prominent international law firm began using them informally across functions - drafting documents, updating CVs, and exploring ways to automate research. While the firm had deployed its own internal AI assistant built on OpenAI technology, leadership knew that unsanctioned use of external tools - also known as shadow AI - was growing fast.
The firm’s CISO recognised that these tools were being used with good intent but posed significant risks. Confidential client data, PII, and even fragments of source code could potentially be exposed to public AI models, breaching privacy obligations and regulatory standards. At the same time, a blanket ban on GenAI tools was seen as overly restrictive and likely to drive such behaviour further underground.
“We were caught between risk and reality,” the CISO explained. “Our people wanted to experiment with AI to work more efficiently. We just needed to ensure it was being done safely.”
What the firm needed was a way to see exactly what tools were being used, identify when risk was introduced, and guide employees in real time, all without blocking innovation or adding friction to daily work.
The Solution
The firm partnered with CultureAI to implement a secure AI usage enablement strategy that combined visibility, proactive protection, and behaviour change at scale.
CultureAI was deployed to track GenAI activity across the web, including both sanctioned and shadow tools. Usage was categorised and surfaced through a central dashboard, allowing the security team to identify which GenAI applications were being used, by whom, and for what types of tasks. Crucially, tools like ChatGPT, Claude, and Gemini were flagged automatically, and new ones could be detected as soon as they appeared.
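To give a feel for what this kind of categorisation involves, the sketch below shows one simple way visited domains might be matched against an inventory of GenAI tools. It is purely illustrative: the tool list, category labels, and domains are assumptions, not CultureAI's implementation.

```typescript
// Illustrative sketch only: classifying visited domains against a known list of
// GenAI tools. The tool list, categories, and domains are assumptions.

type GenAICategory = "sanctioned" | "shadow" | "unknown";

interface GenAITool {
  domain: string;
  name: string;
  category: GenAICategory;
}

// Hypothetical inventory: the firm's internal assistant is sanctioned,
// public tools are treated as shadow AI until reviewed.
const knownTools: GenAITool[] = [
  { domain: "chat.openai.com", name: "ChatGPT", category: "shadow" },
  { domain: "claude.ai", name: "Claude", category: "shadow" },
  { domain: "gemini.google.com", name: "Gemini", category: "shadow" },
  { domain: "ai.internal.firm.example", name: "Internal Assistant", category: "sanctioned" },
];

function classifyVisit(url: string): { tool: string; category: GenAICategory } {
  const host = new URL(url).hostname;
  const match = knownTools.find((t) => host === t.domain || host.endsWith(`.${t.domain}`));
  if (match) {
    return { tool: match.name, category: match.category };
  }
  // Anything not in the inventory is surfaced as "unknown" so the security
  // team can review and categorise it - how newly appearing tools get spotted.
  return { tool: host, category: "unknown" };
}

// Example: classifyVisit("https://claude.ai/chat") -> { tool: "Claude", category: "shadow" }
```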
To tackle high-risk behaviours, CultureAI introduced real-time controls that could detect when users were about to paste sensitive information - such as PII, source code, or confidential client data - into GenAI tools. These prompts were either blocked automatically or intercepted with a customised in-browser coaching message explaining the risk and suggesting safer alternatives, such as the firm’s internal AI assistant.
This in-the-moment intervention proved far more effective than after-the-fact alerts. Employees received contextual guidance while working, helping them understand the organisation’s expectations and reinforcing safe AI use.
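As a general illustration of the technique (not CultureAI's actual detection logic), an in-browser control of this kind can be thought of as a content script that inspects pasted text before it reaches the prompt box. The patterns and coaching message below are deliberately simplified assumptions.

```typescript
// Illustrative sketch only: intercepting a paste into a GenAI prompt box and
// checking it for sensitive content. The detection patterns and messaging are
// simplified assumptions, not CultureAI's implementation.

// Very rough detectors: an email address (as a PII proxy) and code-like keywords.
const piiPattern = /[\w.+-]+@[\w-]+\.[\w.]+/;
const codePattern = /\b(function|class|import|def|public\s+static)\b/;

function assessClipboardText(text: string): "pii" | "code" | null {
  if (piiPattern.test(text)) return "pii";
  if (codePattern.test(text)) return "code";
  return null;
}

// Content-script style handler, running on pages identified as GenAI tools.
document.addEventListener(
  "paste",
  (event: ClipboardEvent) => {
    const text = event.clipboardData?.getData("text/plain") ?? "";
    const risk = assessClipboardText(text);
    if (risk) {
      // Block the paste and show an in-context coaching message instead.
      event.preventDefault();
      window.alert(
        `This looks like ${risk === "pii" ? "personal data" : "source code"}. ` +
          "Please use the firm's internal AI assistant for this task."
      );
    }
  },
  true // capture phase, so the check runs before the page's own handlers
);
```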
CultureAI also helped the firm automate triage for critical risks. When source code or regulated data was detected, alerts were routed via lightweight playbooks directly into the security team's existing SIEM workflows. Meanwhile, the platform's "audit mode" allowed the firm to see broader trends in AI usage, helping the CISO evaluate policies based on real data before enforcing changes.
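Conceptually, a lightweight playbook step like this amounts to forwarding a structured event into the SIEM. The sketch below illustrates the idea; the collector endpoint, token, and event fields are hypothetical stand-ins, and a real integration would follow the SIEM vendor's own API.

```typescript
// Illustrative sketch only: forwarding a detected source-code or regulated-data
// event into an existing SIEM via a generic HTTP collector. The endpoint, token,
// and event shape are assumptions for illustration.

interface GenAIRiskEvent {
  user: string;
  tool: string;          // e.g. "ChatGPT"
  riskType: "source_code" | "regulated_data";
  detectedAt: string;    // ISO timestamp
}

async function forwardToSiem(event: GenAIRiskEvent): Promise<void> {
  // Hypothetical collector URL and token, supplied via environment config.
  const endpoint = process.env.SIEM_COLLECTOR_URL ?? "https://siem.example.com/collect";
  const response = await fetch(endpoint, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.SIEM_TOKEN ?? ""}`,
    },
    body: JSON.stringify({ source: "genai-monitoring", ...event }),
  });
  if (!response.ok) {
    throw new Error(`SIEM forwarding failed: ${response.status}`);
  }
}

// Example: forward a source-code paste incident for triage.
// forwardToSiem({ user: "jdoe", tool: "ChatGPT", riskType: "source_code", detectedAt: new Date().toISOString() });
```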
Privacy concerns were addressed through CultureAI’s privacy-safe approach, which ensured that employee actions were monitored without invasive logging and that no sensitive documents were used to train any models.
The Results
Over the first three months, the firm saw measurable risk reduction and increased employee awareness:
98% of high-risk GenAI submissions (e.g. PII, code) were blocked before reaching public tools
100% of file uploads to GenAI platforms were prevented via browser controls
Over 120 behaviour-based nudges were delivered monthly, with a declining trend as employee understanding improved
20+ shadow AI tools were discovered and categorised, many of which were previously unknown to the security team
The proportion of source code incidents triaged rose from 0% to 100%, with automated alerting playbooks enabling faster investigation
Importantly, the firm did not need to default to a full ban on public GenAI use. Instead, the security team could justify its position with real data, and interventions were tailored to specific behaviours rather than applied indiscriminately.
“Our goal wasn’t to shut everything down,” said the CISO. “We wanted to enable AI safely, not stifle it. CultureAI gave us the confidence to do that.”
What’s Next
With the foundation in place, the firm is planning to expand its approach by:
Building policy tiers tailored to specific roles and data classifications
Enabling GenAI-specific reporting at board level
Deploying CultureAI’s upcoming browser banner functionality more broadly
Extending governance beyond public tools to include internal AI models and Copilot usage
This case demonstrates that even in highly regulated, risk-sensitive environments, organisations can embrace the future of work - so long as they pair visibility with real-time behavioural intervention. With CultureAI, the firm didn't just reduce AI risk. It helped build a safer, smarter culture of innovation.
Book a call with our team to learn more.