Shadow AI Discovery: What Your Organisation Doesn't Know It's Using

Jason Holloway
Tags: shadow-ai-discovery, shadow-ai-detection, unsanctioned-ai-tools

We regularly speak with security leaders who express the same concern: “We know our people are using AI tools, but we don’t know which ones or how many.” This uncertainty reflects the reality of how AI has proliferated across organisations. Unlike traditional software rollouts, AI adoption has often been bottom-up, employee-driven and largely invisible to IT teams.

Research consistently shows that most knowledge workers now use generative AI tools, yet many organisations struggle to account for even half of the AI tools their employees rely on daily. This gap between actual usage and visibility creates genuine security risks that require systematic discovery, not guesswork. Shadow AI discovery requires a systematic approach across four data sources: network traffic analysis, browser and application logs, staff surveys and endpoint detection tools. Understanding what AI tools your organisation is actually using is the essential first step before any AI Security Gap Analysis can begin.

Shadow AI discovery is not about catching people breaking rules. It’s about understanding your actual AI environment so you can manage it effectively. A systematic approach begins with understanding where to look and what signals to track.

Start with network traffic analysis

Your network infrastructure holds the first clues about AI tool usage. Most shadow AI detection methods begin with examining outbound traffic patterns to identify connections to known AI services.

Reviewing outbound traffic from your network and host devices to major AI platforms provides an initial assessment of the scale of shadow AI usage.
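As a rough sketch of what that initial pass can look like, the snippet below counts requests to known AI platforms in exported proxy or firewall logs. The domain list and log format here are illustrative assumptions; adapt both to your own gateway's export format and the services you care about.

```python
# Sketch: flag outbound requests to known AI platforms in a proxy log export.
# AI_DOMAINS and the log line format are illustrative assumptions.
import re
from collections import Counter

AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "api.openai.com",
    "claude.ai", "api.anthropic.com",
    "gemini.google.com", "copilot.microsoft.com",
    "perplexity.ai",
}

HOST_RE = re.compile(r"https?://([^/\s:]+)")

def ai_hits(log_lines):
    """Count requests whose host matches, or is a subdomain of, a known AI domain."""
    counts = Counter()
    for line in log_lines:
        m = HOST_RE.search(line)
        if not m:
            continue
        host = m.group(1).lower()
        for domain in AI_DOMAINS:
            if host == domain or host.endswith("." + domain):
                counts[domain] += 1
    return counts

sample = [
    "10.0.0.12 - GET https://chat.openai.com/backend-api/conversation",
    "10.0.0.14 - GET https://intranet.example.com/home",
    "10.0.0.17 - POST https://api.anthropic.com/v1/messages",
]
print(ai_hits(sample))  # Counter({'chat.openai.com': 1, 'api.anthropic.com': 1})
```

Even a crude count like this is usually enough to show whether you are dealing with a handful of curious users or organisation-wide adoption.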

Examine browser and application data

Modern browsers store extensive data about user activity that can reveal AI tool usage. Browser history analysis across corporate devices will show which AI platforms employees access most frequently. This works particularly well for web-based AI tools, which represent the majority of shadow AI usage.
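A minimal sketch of this kind of history analysis, assuming Chrome's on-disk History database (a SQLite file with a `urls` table): Chrome locks the live file, so work on a copy collected through your device management tooling. The domain list is an illustrative assumption.

```python
# Sketch: query a copied Chrome History database for visits to AI platforms.
# The AI_DOMAINS list is an illustrative assumption -- extend as needed.
import sqlite3

AI_DOMAINS = ("chat.openai.com", "chatgpt.com", "claude.ai", "gemini.google.com")

def ai_visits(history_db_path):
    """Return (url, visit_count) pairs for history entries on known AI platforms."""
    con = sqlite3.connect(history_db_path)
    try:
        rows = con.execute("SELECT url, visit_count FROM urls").fetchall()
    finally:
        con.close()
    return [
        (url, count) for url, count in rows
        if any(d in url for d in AI_DOMAINS)
    ]
```

Aggregating these results across a sample of devices gives a quick ranking of which web-based AI tools see the heaviest use.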

Application logs from collaboration platforms also contain valuable signals. Microsoft Teams, Slack and similar tools often integrate with AI services. For example, meeting notetakers are often overlooked, but can lead to serious and inadvertent data egress to unknown actors and jurisdictions. Review integration logs and third-party application permissions to identify AI tools that employees have connected to your collaboration stack.

If your policies allow, email gateway logs can reveal AI-related service notifications, password reset requests and subscription confirmations, all indicators of active AI tool accounts within your organisation.
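A simple sketch of that kind of mailbox-metadata triage, assuming you can export sender, recipient and subject fields from your gateway: the sender domains and subject keywords below are illustrative assumptions, not a complete list.

```python
# Sketch: flag AI-service account indicators in exported email gateway metadata.
# AI_SENDER_DOMAINS and SUBJECT_KEYWORDS are illustrative assumptions.
import re

AI_SENDER_DOMAINS = ("openai.com", "anthropic.com", "midjourney.com")
SUBJECT_KEYWORDS = re.compile(
    r"(verify your email|password reset|subscription|receipt)", re.I
)

def account_indicators(messages):
    """Return (recipient, sender) pairs that suggest an active AI tool account."""
    hits = []
    for msg in messages:  # each msg: {"from": ..., "to": ..., "subject": ...}
        sender_domain = msg["from"].split("@")[-1].lower()
        if any(sender_domain == d or sender_domain.endswith("." + d)
               for d in AI_SENDER_DOMAINS):
            if SUBJECT_KEYWORDS.search(msg["subject"]):
                hits.append((msg["to"], msg["from"]))
    return hits

msgs = [
    {"from": "noreply@openai.com", "to": "alice@example.com",
     "subject": "Verify your email"},
    {"from": "it@example.com", "to": "alice@example.com",
     "subject": "Password reset"},
]
print(account_indicators(msgs))  # [('alice@example.com', 'noreply@openai.com')]
```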

Survey your technical teams first

Technical teams typically adopt AI tools earlier and more extensively than other departments. Survey your development, IT support and data analysis teams about their AI tool usage before expanding organisation-wide.

This approach serves two purposes: technical teams can provide detailed information about the tools they use, and they often influence AI adoption patterns across other departments. Understanding their usage helps predict where shadow AI is likely to emerge elsewhere.

Ask specific questions about coding assistants (GitHub Copilot, Claude Code, OpenAI Codex, Amazon CodeWhisperer), data analysis tools and automated testing platforms. These tools are particularly common in technical environments and often connect to external AI services.

Deploy endpoint detection strategically

Endpoint detection and response (EDR) tools can identify AI-related applications installed on corporate devices. Configure your EDR system to monitor for AI application installations, browser extensions related to AI services and unusual network communication patterns from endpoints.

Focus particularly on browser extensions. Many AI tools operate through browser extensions that employees install without IT oversight. These extensions often have broad permissions to access web content and can represent significant data exposure risks.
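One way to start on a single endpoint, sketched below, is to read the `manifest.json` of each installed Chrome extension and flag likely AI helpers by name and permission breadth. The keyword list and permission set are illustrative assumptions, the crude substring match will produce false positives ("mail" contains "ai"), and at scale you would run this through your device management platform rather than per machine.

```python
# Sketch: inventory installed Chrome extensions on one endpoint and flag
# likely AI helpers. AI_KEYWORDS and BROAD_PERMS are illustrative
# assumptions; the substring match is deliberately crude and over-inclusive.
import json
from pathlib import Path

AI_KEYWORDS = ("ai", "gpt", "copilot", "assistant", "chat")
BROAD_PERMS = {"<all_urls>", "tabs", "webRequest", "clipboardRead"}

def scan_extensions(ext_root: Path):
    """Walk <profile>/Extensions/<id>/<version>/manifest.json and flag AI-like names."""
    findings = []
    for manifest in ext_root.glob("*/*/manifest.json"):
        data = json.loads(manifest.read_text(encoding="utf-8", errors="ignore"))
        name = str(data.get("name", "")).lower()
        perms = set(data.get("permissions", [])) | set(data.get("host_permissions", []))
        if any(k in name for k in AI_KEYWORDS):
            findings.append({
                "id": manifest.parent.parent.name,
                "name": data.get("name"),
                "broad_permissions": sorted(perms & BROAD_PERMS),
            })
    return findings
```

Anything flagged with broad permissions such as `<all_urls>` deserves a closer look, since those extensions can read the content of every page a user visits.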

Cloud access security brokers (CASB) provide another detection layer if your organisation uses them. CASBs can identify shadow AI usage by monitoring API calls and data transfers to unmanaged cloud services.

A four-week discovery framework

Use this framework to structure your shadow AI discovery efforts.

Week 1: Data collection

  • Export 30 days of firewall and proxy logs
  • Generate DNS query reports for AI-related domains
  • Collect browser history data from sample devices
  • Review email gateway logs for AI service notifications

Week 2: Technical team survey

  • Survey development teams about coding assistants
  • Survey data teams about analysis and visualisation tools
  • Survey IT support about automation and chatbot tools
  • Document integration points with existing systems

Week 3: Pattern analysis

  • Identify most frequently accessed AI platforms
  • Map usage patterns by department and role
  • Correlate network traffic with user accounts
  • Flag high-risk or unknown AI services
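Once network events have been correlated with user accounts, mapping usage by department is a simple aggregation. The sketch below assumes each correlated record carries a department and a platform field; the field names are illustrative.

```python
# Sketch: map AI platform hits by department from correlated events.
# The record field names ("department", "platform") are illustrative assumptions.
from collections import defaultdict

def usage_by_department(events):
    """events: iterable of {"department": ..., "platform": ...} records."""
    table = defaultdict(lambda: defaultdict(int))
    for e in events:
        table[e["department"]][e["platform"]] += 1
    return {dept: dict(platforms) for dept, platforms in table.items()}

events = [
    {"department": "Engineering", "platform": "claude.ai"},
    {"department": "Engineering", "platform": "claude.ai"},
    {"department": "Marketing", "platform": "chatgpt.com"},
]
print(usage_by_department(events))
```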

Week 4: Shadow AI inventory

  • Compile a comprehensive list of identified AI tools
  • Categorise tools by risk level and business function
  • Document data flows and integration points
  • Create a baseline for ongoing monitoring
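The week-4 inventory can be as lightweight as a structured record per tool with a derived risk level. The rules in this sketch are deliberately simple assumptions, meant only to show the shape of the baseline; replace them with your own risk criteria.

```python
# Sketch: a minimal inventory record for the week-4 baseline. The risk
# rules are simple illustrative assumptions, not a real scoring model.
from dataclasses import dataclass, field

@dataclass
class AITool:
    name: str
    business_function: str
    handles_personal_data: bool
    vendor_assessed: bool
    integrations: list = field(default_factory=list)

    @property
    def risk_level(self) -> str:
        # Personal data plus an unassessed vendor is the worst combination.
        if self.handles_personal_data and not self.vendor_assessed:
            return "high"
        if self.handles_personal_data or self.integrations:
            return "medium"
        return "low"

inventory = [
    AITool("ChatGPT", "general drafting", True, False),
    AITool("GitHub Copilot", "development", False, True, ["IDE"]),
]
for tool in inventory:
    print(tool.name, tool.risk_level)  # ChatGPT high / GitHub Copilot medium
```

A flat list like this exports cleanly to CSV and gives the ongoing monitoring programme something concrete to diff against.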

Address the compliance dimension

Three regulatory frameworks apply to most UK organisations discovering shadow AI: UK GDPR (which applies as soon as any AI tool processes personal data), the EU AI Act (for organisations operating in or selling to EU markets) and sector-specific guidance from the FCA and ICO.

Financial services organisations need to identify AI tools that might process customer financial data. Healthcare organisations must flag any AI tools that could access patient information.

Understanding which AI Act obligations could apply to your operations requires legal guidance. Consult your legal counsel for specific compliance requirements under the EU regulation.

Consider reviewing your data protection impact assessments (DPIAs) against your discovered AI tools, as some shadow AI tools may process personal data in ways that could require DPIA updates. Consult your data protection officer or legal counsel for specific requirements.

Build ongoing detection capabilities

Shadow AI discovery is not a one-time exercise. New AI tools emerge constantly, and employee adoption patterns evolve rapidly. Build detection capabilities into your AI security monitoring programme.

Configure automated alerts for connections to new AI services. Update your acceptable use policies to require disclosure of AI tool usage. Establish regular review cycles to reassess your AI tool inventory.
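The core of that alerting logic is a diff against your baseline inventory: anything observed today that isn't in the known set is worth a look. A minimal sketch, assuming you can extract the day's AI-service domains from your logs:

```python
# Sketch: alert on AI-service domains seen today that are absent from the
# baseline inventory. Inputs are illustrative assumptions.
def new_ai_services(todays_domains, baseline):
    """Return AI domains observed today that are not in the known baseline."""
    return sorted(set(todays_domains) - set(baseline))

baseline = {"chat.openai.com", "claude.ai"}
today = {"chat.openai.com", "perplexity.ai"}
print(new_ai_services(today, baseline))  # ['perplexity.ai']
```

In practice you would feed the output into your SIEM or ticketing workflow and add each triaged domain to the baseline so it alerts only once.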

Consider implementing an AI tool approval process that makes it easier for employees to request access to AI tools through official channels rather than adopting them independently.

Run an AI amnesty to surface hidden usage, and adopt a Just Culture approach to user engagement. Identify the tools and use cases, replace them with approved alternatives where possible, and examine whether the productivity gains they deliver can be replicated elsewhere.

Move beyond discovery to risk management

Discovery reveals what AI tools your organisation uses, but it doesn’t assess the risks they represent or help you manage them effectively. Different AI tools present different risk profiles depending on their data handling practices, security controls and integration points with your systems.

Our post on what an AI Security Gap Analysis covers explains what to expect from this structured assessment and how organisations use the findings.

Once you’ve completed your shadow AI discovery, the logical next step is comprehensive risk assessment. Our AI Security Gap Analysis provides structured evaluation of your discovered AI tools against security frameworks and regulatory requirements. This assessment helps prioritise which shadow AI tools require immediate attention and which can remain in use with appropriate controls.

Understanding what AI tools your organisation uses is the foundation of effective AI governance. After all, you can’t manage what you can’t measure. The discovery process requires systematic effort, but it’s essential preparation for managing AI risks across your organisation.

Common questions about shadow AI discovery

What is shadow AI and why is it a security risk?

Shadow AI refers to AI tools, applications and services that employees use without IT or security team oversight. It creates security risk because these tools often process sensitive business data, including customer information, financial records and internal communications, without the data governance, access controls or vendor due diligence that sanctioned AI tools undergo. Most organisations discover significantly more shadow AI usage than they anticipated, and the risk profile varies considerably between tools.

How quickly can an organisation complete a shadow AI discovery?

A structured shadow AI discovery exercise typically takes four weeks: one week for data collection from network and browser logs, one week for technical team surveys, one week for pattern analysis and one week to compile the AI tool inventory. Organisations with existing SIEM infrastructure and cloud access security brokers already in place can often complete the data collection phase in days rather than a week. The timeline extends for large or geographically distributed organisations, or for those with limited logging infrastructure.

What should we do with AI tools we discover that aren’t approved?

The first step is risk assessment: not all unsanctioned AI tools carry equal risk. Tools that process only non-sensitive information with strong vendor security practices may be approved retrospectively with appropriate controls. Tools that access sensitive data, lack transparent data handling practices, or connect to external services without oversight typically require immediate remediation or formal risk acceptance. An AI Security Gap Analysis provides the structured framework for making these decisions consistently across your full inventory.

Does shadow AI discovery require specialist tools, or can we use existing infrastructure?

Most organisations can conduct an initial shadow AI discovery using existing security infrastructure: firewall and proxy logs, SIEM platforms, EDR tools and standard browser management capabilities. Specialist CASB solutions configured for AI service detection improve the speed and completeness of discovery but are not prerequisites for a first exercise. Start with what you have and identify gaps in detection capability as part of the discovery output.

At what point does shadow AI become a regulatory compliance issue?

Shadow AI becomes a compliance issue as soon as it processes personal data covered by UK GDPR, which most AI tools do. Employees submitting customer data to an unvetted AI tool without a data processing agreement in place creates a potential GDPR breach. Under the EU AI Act, organisations deploying high-risk AI systems have additional compliance obligations that extend to shadow AI usage. The ICO has published guidance on AI and data protection that applies directly to these scenarios.

Move Beyond Discovery

Once you know what AI tools your organisation is using, our AI Security Gap Analysis assesses the risks they represent and helps you manage them with confidence.