Shadow AI Policy: How to Find, Govern, and Manage Unauthorised AI Use
A 2024 Salesforce survey found that 55% of employees use AI tools their employers have not approved. A 2025 follow-up put that figure at 68%. Shadow AI is not a future problem — it is happening in your organisation right now.
What Is Shadow AI?
Shadow AI refers to AI tools and AI-enabled features that employees use for work without formal IT approval, procurement review, or legal sign-off. It is the AI equivalent of shadow IT — and it carries the same risks, plus several new ones unique to AI systems.
Common shadow AI patterns include:
- Employees with personal ChatGPT Plus or Claude Pro subscriptions using them for work tasks
- AI writing assistants like Grammarly, Notion AI, or Jasper used without IT visibility
- GitHub Copilot or Cursor IDE licensed individually by developers
- AI-powered browser extensions (summarisers, translators, writing helpers)
- AI features embedded in tools employees already use — Zoom AI Companion, Slack AI, Canva AI
- Consumer AI tools used for sensitive tasks: customer email drafting, contract review, code generation
Why Shadow AI Creates Real Legal Risk
GDPR data breach
An employee pastes customer names, emails, or health data into an unapproved AI tool. That tool's training pipeline, logging, or third-party subprocessors may process that data in ways your privacy notice never disclosed.
EU AI Act deployer liability
Under the EU AI Act, your company is the "deployer" when your employees use AI in business processes, even AI they found themselves. If that AI makes consequential decisions (HR, credit, insurance), the deployer obligations of Article 26 fall on you.
Confidentiality and IP loss
Employees pasting proprietary code, deal terms, client strategies, or trade secrets into AI tools risk that data being used for model training or accessible to the provider's staff.
Output liability
AI-generated content presented as authoritative (legal analysis, medical information, financial projections) without disclosure creates professional liability, and may violate the transparency requirements of EU AI Act Article 50.
Vendor lock-in and data loss
Work product stored in unapproved tools may not be recoverable if the tool shuts down or the employee leaves. Business continuity is often an afterthought with shadow AI.
How to Find Shadow AI in Your Organisation
Before you can govern shadow AI, you need to know what is actually being used. Six discovery methods that work:
1. DNS/proxy logs: look for requests to known AI domains (api.openai.com, api.anthropic.com, generativelanguage.googleapis.com, api.mistral.ai); a log-scan sketch follows this list
2. Browser extension audit: AI extensions in employee browsers (grammar tools, writing assistants, code completion) are often shadow AI vectors
3. SaaS discovery tools: software like Torii, Zylo, or BetterCloud can detect AI SaaS subscriptions charged to corporate cards
4. Expense report review: individual subscriptions to ChatGPT Plus ($20/mo), Claude Pro ($20/mo), or Perplexity Pro ($20/mo) on expense reports
5. Annual IT survey: ask employees directly; most will disclose AI tools if you make it easy to do so without fear of punishment
6. Code repository scanning: GitHub Copilot or Cursor IDE usage shows up in git commit metadata and IDE telemetry
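If you want to automate method 1, a few lines of Python are enough for a first pass. The sketch below assumes a plain-text proxy or DNS log and a hand-maintained domain list; both are illustrative, so adapt the parsing to your proxy's actual log format (Squid, Zscaler, and the like all differ).

```python
from collections import Counter

# Known AI API endpoints; extend with the vendors relevant to your organisation.
AI_DOMAINS = {
    "api.openai.com",
    "chat.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
    "api.mistral.ai",
}

def scan_log(path: str) -> Counter:
    """Count log lines that mention each known AI domain."""
    hits = Counter()
    with open(path, encoding="utf-8", errors="replace") as log:
        for line in log:
            lowered = line.lower()
            for domain in AI_DOMAINS:
                # Substring matching is crude but catches URLs, hostnames,
                # and TLS SNI fields alike; refine per your log format.
                if domain in lowered:
                    hits[domain] += 1
    return hits

if __name__ == "__main__":
    for domain, count in scan_log("proxy.log").most_common():
        print(f"{count:>8}  {domain}")
```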
ComplianceIQ browser extension
The ComplianceIQ browser extension automatically detects AI tools in use across your organisation and syncs them to your AI system inventory — including tools employees use without IT approval. It is the fastest way to get a complete picture of your shadow AI footprint.
The Three-Tier AI Tool Policy
The most practical shadow AI policy uses a traffic-light tier system. Blanket bans do not work — employees route around them. Blanket permission is not an option — the legal risk is real. A tiered approach gives employees clarity and gives legal a defensible position.
Tier 1 — Approved
AI tools that have passed procurement review, with a DPA/BAA signed and a GDPR assessment completed. Employees can use these freely for work.
Examples
- Microsoft Copilot (M365 integration)
- Company-deployed Claude instance with BAA
- Approved code completion tool
Tier 2 — Conditional
AI tools employees may use for specific, low-risk tasks only. No customer data, no internal documents, no code. Approval not required but restrictions apply.
Examples
- ChatGPT free/Plus (public information research only)
- Grammarly (non-confidential text only)
- Image generation tools (public-domain prompts only)
Tier 3 — Prohibited
AI tools that may not be used for any company-related work, typically because of inadequate data protection, no available DPA or BAA, or unacceptable training-data practices.
Examples
- Any tool with no DPA available
- Tools that explicitly train on inputs by default
- Tools with no EU data residency option for GDPR-scope data
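The tier system is easier to enforce if it exists somewhere as data rather than only as a PDF. Here is a minimal Python sketch of a machine-readable registry that discovery scripts or browser tooling could query; the tool entries are illustrative, and the default-deny rule for unknown tools is our assumption, not something the tiers require.

```python
from enum import Enum

class Tier(Enum):
    APPROVED = 1      # procurement review passed, DPA/BAA signed, GDPR assessed
    CONDITIONAL = 2   # low-risk tasks only: no customer data, documents, or code
    PROHIBITED = 3    # no company-related use

# Illustrative entries only; a real registry is the output of your approval process.
TOOL_REGISTRY = {
    "microsoft copilot": Tier.APPROVED,
    "chatgpt": Tier.CONDITIONAL,            # public-information research only
    "grammarly": Tier.CONDITIONAL,          # non-confidential text only
    "example-tool-without-dpa": Tier.PROHIBITED,
}

def classify(tool_name: str) -> Tier:
    # Default-deny: a tool nobody has reviewed is treated as prohibited
    # until it passes the fast-track approval process.
    return TOOL_REGISTRY.get(tool_name.strip().lower(), Tier.PROHIBITED)

assert classify("ChatGPT") is Tier.CONDITIONAL
assert classify("brand-new-ai-tool") is Tier.PROHIBITED
```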
What Your Shadow AI Policy Must Include
A policy that says "employees may not use unauthorised AI tools" is not a policy — it is a wish. An effective shadow AI policy covers:
Scope
Which AI tools count. Include: standalone AI apps, AI features embedded in existing tools, AI APIs called by employee scripts, AI browser extensions, and personal AI subscriptions used for work.
Data classification rules
Specify which data types may never enter unapproved tools: personal data of customers or employees, protected health information, confidential client information, source code in proprietary systems, financial forecasts and M&A information.
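These rules can be partially automated at the point of use, for example in a browser extension or a pre-submit hook. The regex sketch below is deliberately simplistic: two illustrative patterns, nowhere near real DLP coverage, but it shows the shape of such a check.

```python
import re

# Deliberately simple patterns; they catch only the most obvious cases
# and are no substitute for dedicated DLP tooling.
BLOCKED_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "IBAN-like string": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of every blocked pattern found in the text."""
    return [name for name, pattern in BLOCKED_PATTERNS.items()
            if pattern.search(text)]

findings = flag_sensitive("Draft a reply to anna.schmidt@example.com about her claim")
if findings:
    print("Blocked before submission:", ", ".join(findings))
```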
Approval process
A fast-track approval path for new AI tools (target: 5 business days for standard SaaS, 10 days for tools processing personal data). Without this, employees route around the policy.
Disclosure requirements
When employees use AI to generate customer-facing content, do they need to disclose it? EU AI Act Article 50 requires disclosure for some AI outputs. Your policy should match the legal requirement plus your brand standards.
Enforcement and consequences
What happens on first violation, repeat violations, and serious violations (e.g., customer data in an unapproved tool). Without consequences, the policy has no teeth.
Amnesty period
When launching a new shadow AI policy, give employees 30 days to self-disclose current usage without penalty. This surfaces your full shadow AI footprint faster than any technical discovery.
Shadow AI and GDPR: The Data Processor Problem
Under GDPR, when an employee sends personal data to a third-party AI tool, that tool becomes a data processor. Article 28 requires that every data processor be covered by a Data Processing Agreement (DPA).
Shadow AI tools, by definition, have not been through your DPA process. This means every instance of an employee sending customer data to an unapproved AI tool is a potential Article 28 violation — and potentially a data breach requiring notification under Article 33 if the tool's terms allow training on inputs.
What regulators have said
Italy's Garante blocked ChatGPT for GDPR violations in 2023. The Irish DPC opened an inquiry into OpenAI in 2023. Samsung banned ChatGPT internally after employees leaked chip design data. Your shadow AI policy is your defence against the same exposure.
30-Day Implementation Plan
1. Discovery: run all six detection methods. Build your shadow AI inventory. Do not take any action yet.
2. Categorisation: sort each discovered tool into Tier 1/2/3. For Tier 1 candidates, initiate DPA and privacy review.
3. Policy drafting: draft the policy using the three-tier framework. Get legal sign-off. Prepare communication materials.
4. Launch: open the amnesty period, announce the policy, and provide approved alternatives for tools being banned. Run a 30-minute lunch-and-learn.
5. Ongoing: quarterly shadow AI scans, a fast-track approval process that stays open, and a monthly review of Tier 2/3 decisions as the tool landscape evolves.