How to Write an AI Acceptable Use Policy
Most companies have an Acceptable Use Policy for technology. Very few have one specifically for AI. With EU AI Act Article 4 in force and GDPR obligations applying to AI data processing, the gap is a compliance risk. This is the template.
Why Every Organisation Needs an AI AUP in 2026
Three developments have made an AI-specific Acceptable Use Policy a compliance necessity:
EU AI Act Article 4 (in force February 2025)
Requires providers and deployers to ensure a sufficient level of AI literacy among staff who operate AI systems. An AUP is the foundational document for demonstrating this obligation has been met.
GDPR accountability (Article 24)
Demonstrating compliance requires documented policies. When regulators investigate AI incidents, the first document they request is your AI governance policy.
Samsung / confidentiality incidents
Samsung employees uploaded proprietary chip designs to ChatGPT. Amazon employees uploaded internal code. Without a policy, employees do not know the rules — and the company has no defence.
The 7 Required Sections — With Template Language
Every AI Acceptable Use Policy needs these seven sections. The template language below can be adapted to your organisation — replace [Company] with your name.
1. Purpose and scope
Must address:
- Why the policy exists (legal compliance, risk management, ethical use)
- Who it applies to (employees, contractors, temporary staff, third parties acting on behalf of the company)
- What "AI systems" means for your company (include: standalone tools, AI features in existing software, personal AI subscriptions used for work, AI browser extensions)
Example language:
"This policy applies to all [Company] employees, contractors, and third parties when using AI systems in connection with their work for [Company]. It covers all AI tools, including but not limited to: generative AI assistants, AI-powered writing tools, AI code completion tools, and AI features embedded in software used for business purposes."
2. Approved AI tools
Must address:
- Tier 1 (approved): tools cleared through procurement, DPA signed
- Tier 2 (conditional): tools permitted for specific limited uses
- Tier 3 (prohibited): tools that may not be used for company work
- Process for requesting approval of new AI tools
Example language:
"A current list of Tier 1 approved tools is maintained on the intranet. Employees may use Tier 1 tools without additional approval. Employees wishing to use an AI tool not on the approved list must submit a request through IT. Unapproved tools must not be used for company work pending review."
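The tiered approach above lends itself to simple tooling. As an illustrative sketch (tool names and tiers here are hypothetical, not recommendations), an IT team could maintain the Tier 1/2/3 registry as data and resolve any tool name to the action the policy prescribes:

```python
# Hypothetical tool registry maintained alongside the AUP.
# Entries are illustrative examples, not an endorsement of specific tools.
TOOL_REGISTRY = {
    "ExampleChat Enterprise": "tier1",  # approved: DPA signed, cleared by procurement
    "ExampleCode Assistant": "tier2",   # conditional: internal code only
    "Free consumer chatbot": "tier3",   # prohibited for company work
}

def usage_decision(tool: str) -> str:
    """Map a tool name to the action the policy prescribes."""
    tier = TOOL_REGISTRY.get(tool)
    if tier == "tier1":
        return "allowed"
    if tier == "tier2":
        return "allowed with conditions - check the intranet entry"
    if tier == "tier3":
        return "prohibited"
    # Unknown tools default to the approval-request path, mirroring
    # the "pending review" rule in the template language above.
    return "not listed - submit an approval request to IT"
```

The key design point is the default case: a tool absent from the registry is treated as unapproved, so the safe path is the documented one.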
3. Data classification rules
Must address:
- What data categories may NEVER be entered into any AI tool (customer personal data, health data, financial data, trade secrets)
- What data may be used with Tier 1 approved tools only
- What data may be used with any tool (public information, general research)
Example language:
"The following data must never be entered into any AI tool without explicit written approval from [Data Protection Officer/Legal]: personal data of customers or employees; protected health information; confidential client information; source code from proprietary systems; terms of active negotiations; material non-public information."
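Naming data categories explicitly also makes them machine-checkable. The sketch below (patterns and category names are illustrative placeholders, not production-grade DLP rules) shows how a pre-submission screen could flag prompts that appear to contain never-share categories before they reach an external AI tool:

```python
import re

# Hypothetical pre-submission screen for the "never enter" categories.
# Real deployments would use a proper DLP engine; these patterns only
# illustrate the idea of checking named categories, not a complete rule set.
NEVER_SHARE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "confidentiality marker": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of never-share categories detected in a prompt."""
    return [name for name, pattern in NEVER_SHARE_PATTERNS.items()
            if pattern.search(text)]
```

A non-empty result would block submission and point the employee at the policy's escalation path rather than silently rewriting their prompt.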
4. Permitted uses
Must address:
- Specific use cases that are permitted (drafting internal documents, code generation, research)
- Use cases that require additional approval (customer-facing content, legal or medical advice, HR decisions)
- Use cases that are prohibited entirely (generating content to deceive, creating fake identities)
Example language:
"Permitted uses include: drafting internal communications, summarising documents, writing code for internal tools, generating ideas for review by a human expert. Prohibited uses include: making final employment decisions without human review, generating content presenting AI output as human-created without disclosure, processing customer personal data for AI model training."
5. Human oversight and verification
Must address:
- AI outputs must be reviewed by a qualified human before reliance
- Who is responsible for verifying AI outputs in specific contexts (legal, medical, financial)
- How to report AI errors or unexpected outputs
Example language:
"All AI-generated content used in external communications, legal documents, financial reports, or medical recommendations must be reviewed and approved by a qualified professional before use. Employees remain responsible for the accuracy of content they submit, regardless of whether AI was used in its creation."
6. Disclosure obligations
Must address:
- When to disclose AI use to customers or clients
- Disclosure requirements for AI-generated content under EU AI Act Article 50 (numbered Article 52 in earlier drafts)
- Internal disclosure: how to flag AI-generated work in collaborative documents
Example language:
"Employees must disclose AI use when: creating content for external publication, generating analysis for clients, creating text that may be mistaken for human-written content in a context where this matters. For customer-facing AI interactions, the customer must be informed they are interacting with an AI system unless it is obvious."
7. Consequences of non-compliance
Must address:
- What constitutes a policy violation
- Escalation path for suspected violations
- Consequences — linked to existing disciplinary policy
- How employees can raise concerns without fear of retaliation
Example language:
"Violations of this policy will be addressed under [Company]'s standard disciplinary process. Serious violations — including use of unapproved tools to process customer data — may result in immediate suspension pending investigation. Employees should report suspected violations to [IT/Legal/Compliance] without fear of retaliation."
5 Common Mistakes to Avoid
Blanket ban on all AI tools
Why it fails: Impossible to enforce. Employees route around it. You lose visibility into what is actually being used.
Fix: Use a tiered approach: approved / conditional / prohibited. Give employees legitimate options.
No process for requesting new tools
Why it fails: Employees will use unapproved tools if the approval process does not exist. Demand creates shadow AI.
Fix: Create a fast-track approval path (5 business days target). Make it easy to do the right thing.
Data rules that are too vague
Why it fails: "No sensitive data" is not a rule — it requires employees to make judgment calls they are not equipped to make.
Fix: List specific data categories by name. If you have a data classification scheme, reference it explicitly.
Policy not reviewed after AI landscape changes
Why it fails: New AI tools launch monthly. A policy written in 2023 is likely already out of date.
Fix: Build in a quarterly review cycle. Assign a named owner who is responsible for currency.
No mention of embedded AI features
Why it fails: Employees think the policy only applies to standalone AI tools — not to AI features in Word, Zoom, Slack, or Salesforce.
Fix: Explicitly state the policy covers AI features in existing tools, not just dedicated AI products.
Generate your AI policy in ComplianceIQ
ComplianceIQ generates AI Acceptable Use Policy drafts tailored to your jurisdiction, industry, and AI system inventory — with all 7 required sections pre-populated.
Start free