California SB 243 — AI Companion Chatbot Safety Act: AI Compliance Requirements
California SB 243 (signed October 13, 2025, effective January 1, 2026) regulates operators of AI companion chatbots — AI systems designed to simulate sustained, human-like companionship, emotional connection, or romantic relationships. Operators must disclose the chatbot's AI nature clearly and conspicuously whenever a reasonable person might believe they are interacting with a human. For users known to be minors, the law requires a disclosure every 3 hours of sustained interaction along with a reminder to take a break. Operators must publish suicide/self-harm crisis intervention protocols on their website and submit annual reports to the California Office of Suicide Prevention starting in 2027. A private right of action allows injured persons to recover $1,000 per violation. The law applies to companion AI platforms (character.ai, Replika-style apps), not to general-purpose AI assistants or business chatbots.
Key Facts
Effective date: January 1, 2026
Penalty: $1,000 per violation (private right of action); California AG enforcement
What Your Business Must Do
3 compliance requirements identified. Critical requirements carry the highest risk of enforcement action.
AI Companion Identity Disclosure
Critical
If you operate an AI companion chatbot — an AI system designed to build sustained, human-like companionship or emotional relationships — you must provide a clear and conspicuous disclosure that the user is interacting with AI whenever a reasonable person might believe they are talking to a human. This applies at the start of each interaction session.
Deadline: January 1, 2026
Enhanced Protections for Minor Users
Critical
When you know a user is a minor: (1) disclose the AI nature of the chatbot at the start of interaction; (2) send a reminder to take a break every 3 hours of continuous interaction; (3) implement safety protocols preventing generation of content related to suicidal ideation or self-harm; (4) provide links to crisis helplines when self-harm topics arise.
Deadline: January 1, 2026
Crisis Intervention Protocol (Publicly Available)
High Priority
Publish your self-harm and suicidal ideation crisis intervention protocols on your website. The protocol must explain how the AI detects distress signals, what it does when a user expresses self-harm intent (crisis hotline referrals), and how you test and update these safeguards.
Deadline: January 1, 2026
Frequently Asked Questions
Does California SB 243 — AI Companion Chatbot Safety Act apply to my business?
California SB 243 (signed October 13, 2025, effective January 1, 2026) regulates operators of AI companion chatbots — AI systems designed to simulate sustained, human-like companionship, emotional connection, or romantic relationships. If your product fits that description, the disclosure, minor-protection, and crisis-protocol requirements above apply; general-purpose AI assistants and business chatbots are outside its scope. Use ComplianceIQ's free scanner to get a personalized assessment in under 5 minutes.
What is the penalty for non-compliance?
Under California SB 243 — AI Companion Chatbot Safety Act, an injured person may recover $1,000 per violation through the private right of action, and the California Attorney General can also enforce the law. Because the statutory amount applies per violation, total exposure scales with the number of affected interactions rather than with company size.
How do I comply with California SB 243 — AI Companion Chatbot Safety Act?
The 3 requirements above cover the core obligations. The fastest path to compliance is: (1) conduct an AI risk assessment, (2) document your AI systems, (3) implement transparency disclosures where required. ComplianceIQ generates all required documents automatically.
Official Source
https://sd18.senate.ca.gov/news/first-nation-ai-chatbot-safeguards-signed-law
Last updated: 2026-04-13 — verify at source before relying on this information.
Don't leave compliance to chance
ComplianceIQ scans your AI tools, tells you exactly which regulations apply, and generates all required documents — in 30 minutes.
Start your free compliance scan