
How a Digital Bank Reduced Shadow AI Risk by 80% Without Blocking Innovation

When a fast-scaling digital bank began seeing widespread employee adoption of generative AI tools like ChatGPT and Gemini, their security team faced a growing dilemma: how do you protect sensitive data without shutting down innovation?

Despite having a mature security stack, including SIEM, DLP, CASB, and MDM, the team lacked clear visibility into how AI was being used, especially when it was accessed through personal accounts on corporate devices. Risky behaviours, such as copying internal code into AI tools or reusing passwords across unmonitored SaaS apps, weren't being detected in real time. And even when alerts were triggered, the signal-to-noise ratio was poor, overwhelming the SecOps team and eroding trust in the data.

The bank needed a new approach: not just another tool to block usage, but a smarter, more nuanced way to enable AI adoption safely and visibly across the business.

From Risk to Resilience: Enabling GenAI Without Fear

By deploying our secure AI usage enablement platform, the bank quickly gained visibility and control over employee AI behaviour without slowing productivity.

The first step was understanding who was using AI, how, and where. Our platform provided real-time telemetry on prompt-level activity, flagging use of generative AI tools via personal accounts on corporate machines: a major data-loss vector that existing tools couldn't see.
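To make that concrete, here is a minimal sketch of the kind of check involved: classifying a prompt-level event as personal-account usage on a managed device. The event shape, field names, and domain list are hypothetical and purely illustrative; they are not the platform's actual schema.

```typescript
// Hypothetical sketch: flagging GenAI prompt activity that comes from a
// personal account on a corporate (managed) device. All names are illustrative.

interface PromptEvent {
  tool: string;            // e.g. "chatgpt" or "gemini"
  accountEmail: string;    // account signed in to the AI tool
  deviceManaged: boolean;  // true if the event originated on a corporate device
}

const CORPORATE_DOMAINS = new Set(["bank.example.com"]);

function isPersonalAccountUsage(event: PromptEvent): boolean {
  const domain = event.accountEmail.split("@")[1]?.toLowerCase() ?? "";
  // Personal-account usage on a managed device is the blind spot described
  // above: the traffic looks routine to DLP/CASB, but data leaves through an
  // account the organisation doesn't control.
  return event.deviceManaged && !CORPORATE_DOMAINS.has(domain);
}
```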

Next, the security team created role-based policies. Employees in departments licensed to use Gemini were placed in an "approved" group, while unlicensed users remained under stricter monitoring. This conditional logic allowed the platform to suppress noise and reduce false positives to single digits, while still escalating the risks that truly mattered.
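The policy logic behind that grouping can be pictured roughly as below. The group name, severity levels, and event fields are invented for illustration; they are not CultureAI's actual policy model or API.

```typescript
// Hypothetical sketch of group-based, conditional alerting. All identifiers
// are illustrative.

type Severity = "none" | "monitor" | "escalate";

interface AiUsageEvent {
  tool: string;              // e.g. "gemini"
  personalAccount: boolean;  // true if used via a personal rather than corporate account
}

interface UserContext {
  groups: string[];          // e.g. ["gemini-approved"]
}

function evaluateGenAiUse(user: UserContext, event: AiUsageEvent): Severity {
  const approved = user.groups.includes("gemini-approved");

  if (approved && event.tool === "gemini" && !event.personalAccount) {
    return "none";      // sanctioned usage by a licensed group: suppress the alert
  }
  if (event.personalAccount) {
    return "escalate";  // personal-account usage is the risk that truly matters
  }
  return "monitor";     // unlicensed, corporate-account usage stays under watch
}
```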

Real-Time Nudging, Not Blocking

Rather than punishing users or forcing them to work around controls, the bank used real-time browser nudges to guide behaviour at the moment of risk. If someone tried to paste source code into an AI tool, for example, a discreet message explained the risk and prompted a course correction.
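Conceptually, a nudge like this could be implemented as a content script watching paste events, along the lines of the sketch below. The detection heuristic, message text, and styling are assumptions made for illustration; the platform's real detection and UI are more sophisticated.

```typescript
// Hypothetical sketch of an in-browser paste nudge. The heuristic, message,
// and styling are illustrative only.

function looksLikeSourceCode(text: string): boolean {
  // Crude heuristic: density of code-like punctuation and common keywords.
  const codeTokens = /[{};]|=>|\b(function|class|import|def|return)\b/g;
  return (text.match(codeTokens)?.length ?? 0) > 5;
}

function showNudge(message: string): void {
  const banner = document.createElement("div");
  banner.textContent = message;
  banner.style.cssText =
    "position:fixed;bottom:16px;right:16px;max-width:320px;padding:12px;" +
    "background:#1f2937;color:#fff;border-radius:8px;z-index:99999;";
  document.body.appendChild(banner);
  setTimeout(() => banner.remove(), 10_000); // auto-dismiss after ten seconds
}

document.addEventListener("paste", (event: ClipboardEvent) => {
  const pasted = event.clipboardData?.getData("text") ?? "";
  if (looksLikeSourceCode(pasted)) {
    // Guide, don't block: the paste still goes through, but the user sees a
    // discreet explanation of the risk.
    showNudge(
      "Pasting internal source code into AI tools can expose sensitive data. " +
      "Consider using your approved corporate account or removing proprietary code."
    );
  }
});
```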

Over the first month, the platform delivered more than 2,000 real-time nudges, helping employees learn what was safe and what wasn’t, without slowing them down.

Finding the Blind Spots

In parallel, the platform's SaaS visibility layer surfaced several AI-enabled tools that had been adopted organically across the business. In one instance, more than 100 employees had accessed a new AI-powered productivity app that wasn't part of the approved stack. The security team received an alert before the tool gained deeper traction, allowing them to assess the risk and take appropriate action, all within a few days rather than weeks.
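The underlying pattern is straightforward to picture: aggregate access events per app and alert once an unapproved app crosses an adoption threshold. The sketch below uses invented app names and a threshold chosen purely for illustration.

```typescript
// Hypothetical sketch of shadow-SaaS discovery: count distinct users per
// unapproved app and surface the ones gaining traction. Names and the
// threshold are illustrative.

interface AccessEvent {
  appDomain: string;  // e.g. "new-ai-notes.example"
  userId: string;
}

const APPROVED_APPS = new Set(["gemini.google.com", "workspace.google.com"]);
const ADOPTION_THRESHOLD = 25;  // distinct users before SecOps is alerted

function findEmergingShadowApps(events: AccessEvent[]): string[] {
  const usersPerApp = new Map<string, Set<string>>();

  for (const { appDomain, userId } of events) {
    if (APPROVED_APPS.has(appDomain)) continue;
    if (!usersPerApp.has(appDomain)) usersPerApp.set(appDomain, new Set());
    usersPerApp.get(appDomain)!.add(userId);
  }

  return Array.from(usersPerApp.entries())
    .filter(([, users]) => users.size >= ADOPTION_THRESHOLD)
    .map(([app]) => app);
}
```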

Results That Matter

In just a few weeks, the bank saw tangible results:

  • Shadow AI usage dropped by 80%, as employees shifted from personal to approved accounts.

  • DLP noise from GenAI tools was eliminated, thanks to contextual alerting and group-based policies.

  • Time-to-detection for new SaaS tools improved by 70%, giving the team a clear edge on managing shadow IT.

  • More than 2,000 in-the-moment nudges helped employees course-correct without friction or delay.

Buoyed by the early success, the bank is now planning to expand the deployment across customer support and operations and to introduce deeper integrations with their security tech stack.

Their journey reflects a broader shift happening across the industry: from blocking AI to enabling it — responsibly, safely, and at scale.

Want to understand how Culture AI can help your business? Book a call with our team.