
AI Compliance in Canada: AIDA, PIPEDA, and Provincial Laws 2026

Canada is building one of the most comprehensive AI regulatory frameworks outside the EU. The federal Artificial Intelligence and Data Act (AIDA) is advancing through Parliament. Quebec Law 25 is in full force and has requirements that exceed PIPEDA. Federal regulators in finance and insurance have issued AI-specific guidance. Here is what Canadian AI compliance looks like in 2026.

Updated April 2026 · by ComplianceIQ Editorial

Canadian AI Regulatory Landscape 2026

| Law | Scope | Status | Notes |
| --- | --- | --- | --- |
| PIPEDA | Federal — private sector | In force | Proposed replacement (the CPPA, Bill C-27 Part 1) was not enacted before the 2026 election |
| AIDA (Artificial Intelligence and Data Act) | Federal — high-impact AI | Advancing through Parliament | Part of Bill C-27; expected enactment 2026-2027; key provisions already known |
| Quebec Law 25 (formerly Bill 64) | Quebec — all sectors | Fully in force since Sept 2023 | Strictest data law in Canada; includes automated decision-making rights |
| Ontario Bill 194 (Strengthening Cyber Security and Building Trust in the Public Sector Act) | Ontario — public sector | Passed 2024 | Transparency requirements for AI in Ontario government and the broader public sector |
| OSFI Guideline E-23 | Federally regulated financial institutions | In force | Model risk management, including AI/ML models |

AIDA: Canada's Artificial Intelligence and Data Act

The Artificial Intelligence and Data Act (AIDA) is Part 3 of Bill C-27, introduced in Parliament in June 2022. It creates Canada's first federal framework specifically for AI regulation, using a risk-based approach similar to the EU AI Act.

Legislative status as of April 2026

Bill C-27 was before Parliament but had not been enacted as of April 2026. Canada held a federal election, creating uncertainty about the bill's timeline. However, the key provisions of AIDA are widely expected to be enacted in some form — businesses should design systems now that would comply with AIDA's requirements.

AIDA key provisions

High-impact AI systems

AIDA creates a category of "high-impact AI systems" — defined by regulation, but expected to include AI used in consequential decisions about individuals (employment, credit, benefits, healthcare), AI with broad reach, and AI in critical infrastructure. Companies that develop or deploy high-impact AI systems have specific obligations.

Obligations for developers and deployers

For high-impact AI systems, developers and deployers must: assess and mitigate risks before and during deployment; monitor for risks on an ongoing basis; keep records of assessments and mitigation measures; implement human oversight where risks cannot be fully mitigated; notify the designated Minister where use of the system results, or is likely to result, in material harm; and maintain an AI governance program.
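The obligations above amount to structured record-keeping. A minimal sketch of what such a record might look like in code — field names and the `HighImpactSystemRecord` structure are our illustration, not anything prescribed by AIDA, whose record-keeping regulations are not yet final:

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative only: AIDA's regulations will define the actual
# required content of risk-assessment and monitoring records.

@dataclass
class RiskAssessment:
    system_name: str
    intended_purpose: str
    assessed_on: date
    identified_risks: list[str]
    mitigation_measures: list[str]
    human_oversight: bool          # required where risks cannot be fully mitigated
    residual_risk: str             # e.g. "low", "medium", "high"

@dataclass
class HighImpactSystemRecord:
    assessment: RiskAssessment
    monitoring_log: list[str] = field(default_factory=list)

    def log_incident(self, note: str, material_harm: bool = False) -> None:
        """Keep the ongoing monitoring record; flag incidents that may
        trigger the duty to notify the designated Minister."""
        self.monitoring_log.append(note)
        if material_harm:
            self.monitoring_log.append("ACTION: notify designated Minister")

record = HighImpactSystemRecord(
    RiskAssessment(
        system_name="resume-screening-model",
        intended_purpose="Rank job applicants for recruiter review",
        assessed_on=date(2026, 4, 1),
        identified_risks=["demographic bias in rankings"],
        mitigation_measures=["quarterly bias audit", "recruiter makes final call"],
        human_oversight=True,
        residual_risk="medium",
    )
)
record.log_incident("Q2 bias audit completed; no disparity found")
```

Even a lightweight structure like this makes the later steps — transparency descriptions and Minister notification — a matter of reading existing records rather than reconstructing history.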

Transparency requirements

Make publicly available a plain-language description of each high-impact AI system, including its intended purpose, the types of decisions it makes, and the measures taken to mitigate risks. This is somewhat less prescriptive than the EU AI Act's technical documentation requirements.

Prohibited conduct

AIDA prohibits: using AI in a reckless manner that causes serious harm to individuals, making an AI system available knowing it will be used for fraud, and using unlawfully obtained personal information to build AI systems (linked to CPPA provisions). Criminal penalties apply to prohibited conduct.

Enforcement

A new AI and Data Commissioner would be appointed to enforce AIDA. Administrative monetary penalties run up to the greater of $10M or 3% of global revenue; criminal penalties for recklessly deploying harm-causing AI reach the greater of $25M or 5% of global revenue. Public disclosure of violations also carries significant reputational risk.

PIPEDA and AI: Current Obligations

The Personal Information Protection and Electronic Documents Act (PIPEDA) is Canada's current federal private-sector data protection law. It applies to any organization that collects, uses, or discloses personal information in the course of commercial activity. PIPEDA was not written for AI, but its principles apply directly to AI systems.

Accountability
Someone in your organization is responsible for PIPEDA compliance for each AI system. You need an AI system owner who can answer questions about data use.
Limiting collection
AI systems may only collect the personal information necessary for the stated purpose. Training data collected for one purpose cannot be used to train an AI for a different purpose without re-consent.
Consent
Most personal data used in AI requires consent. For sensitive uses (profiling, monitoring), meaningful consent is required — not buried in terms of service.
Individual access
Individuals can request access to their personal information held in AI systems — including what data exists, how it was used, and to whom it was disclosed.
Automated decision-making (OPC guidance)
The Office of the Privacy Commissioner (OPC) has issued guidance that individuals have the right to know when AI makes decisions about them, to receive a meaningful explanation of how the decision was made, and to have an error corrected.

OPC enforcement: The Office of the Privacy Commissioner has increased AI-related investigations. OPC investigated Clearview AI (facial recognition scraping) and found it violated PIPEDA. OPC investigated Tim Hortons' app for tracking location without meaningful consent. AI systems that profile individuals or make automated decisions about them are a growing enforcement priority.

Quebec Law 25: Canada's Strictest Data Law

Quebec's Law 25 (in French, Loi modernisant des dispositions législatives en matière de protection des renseignements personnels; in English, the Act to modernize legislative provisions as regards the protection of personal information), fully in force since September 2023, goes significantly further than PIPEDA. For companies with customers or employees in Quebec, Law 25 creates the most demanding Canadian compliance requirements — including provisions that look more like GDPR than PIPEDA.

Automated decision-making disclosure (Section 12.1)

Section 12.1 — Key AI Obligation

When a decision is based exclusively on automated processing of personal information and produces legal or significant effects, individuals must be notified. They have the right to: know what personal information was used, request human review of the decision, and submit observations. This is Canada's clearest analog to GDPR Article 22.
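The workflow this implies — notify the individual, disclose the data used, accept observations, and route the decision to human review on request — can be sketched as follows. This is a hypothetical illustration; the class and method names are ours, not terms from the statute:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the Law 25 automated-decision workflow.
# Field and method names are illustrative, not statutory terms.

@dataclass
class AutomatedDecision:
    subject_id: str
    outcome: str                      # e.g. "credit application declined"
    data_used: list[str]              # personal information the model relied on
    review_requested: bool = False
    observations: list[str] = field(default_factory=list)

    def notification(self) -> str:
        """Plain-language notice given when the decision is communicated."""
        return (
            f"This decision ({self.outcome}) was made exclusively by "
            f"automated processing of your personal information. You may "
            f"ask what information was used, submit observations, and "
            f"request that a person review the decision."
        )

    def request_human_review(self, observation: str) -> None:
        """Record the individual's observations and flag for human review."""
        self.review_requested = True
        self.observations.append(observation)

decision = AutomatedDecision(
    subject_id="applicant-4821",
    outcome="credit application declined",
    data_used=["income", "credit history", "existing debt"],
)
print(decision.notification())
decision.request_human_review("My income figure is out of date.")
```

The design point: the notification and review path should be built into the decision pipeline itself, not bolted on as a manual support process after a complaint arrives.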

Privacy Impact Assessments (PIAs) mandatory

Mandatory before launch

Any project involving the collection, use, communication, or disclosure of personal information, or any IT system that processes personal information, must have a Privacy Impact Assessment conducted before launch. This applies directly to AI system deployments.

Data minimization and purpose limitation

Stricter than PIPEDA: personal information may only be collected for the explicit purpose stated in the privacy notice. Using data collected for one purpose in a new AI system requires a new PIA and updated privacy notice.

Breach notification

Breaches involving personal information must be reported to the Commission d'accès à l'information (CAI) and affected individuals "without delay" when there is a risk of serious harm.

What to do now

Complete a PIPEDA compliance review for all AI systems that process Canadian personal data. The OPC's guidance on automated decision-making provides the specific framework.
If you have Quebec customers or employees: conduct mandatory Privacy Impact Assessments (PIAs) for all AI systems before deployment. Implement Section 12.1 disclosure and human review mechanisms.
Inventory your high-impact AI systems against AIDA's expected scope. Even before enactment, use AIDA's framework for risk assessment and governance.
For federally regulated financial institutions: ensure your model risk management framework under OSFI Guideline E-23 covers AI/ML models.
Appoint a Privacy Officer with authority over AI system reviews. Both PIPEDA and Law 25 require an accountable individual.
Prepare automated decision-making notices in both English and French for Quebec users.
Review data processing agreements with any AI vendors processing Canadian personal data.
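A starting point for the inventory step above is a simple mapping from each AI system's characteristics to the regimes that plausibly apply. The trigger questions below are deliberate simplifications for triage, not legal tests:

```python
# Illustrative triage only: maps coarse yes/no attributes of a system
# to the Canadian regimes that plausibly apply. Not legal advice.

def applicable_regimes(system: dict) -> list[str]:
    regimes = []
    if system.get("processes_canadian_personal_data"):
        regimes.append("PIPEDA")
    if system.get("quebec_users"):
        regimes.append("Quebec Law 25 (PIA before launch)")
    if system.get("consequential_decisions"):
        regimes.append("AIDA (expected high-impact scope)")
    if system.get("federally_regulated_fi"):
        regimes.append("OSFI E-23")
    return regimes

loan_model = {
    "name": "loan-pre-approval",
    "processes_canadian_personal_data": True,
    "quebec_users": True,
    "consequential_decisions": True,
    "federally_regulated_fi": True,
}
print(applicable_regimes(loan_model))
```

Systems that trigger multiple regimes, like the example above, are the natural place to start the PIPEDA review and PIA work.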

Check your Canadian AI compliance status

ComplianceIQ covers PIPEDA, Quebec Law 25, and AIDA. Get your free risk report and see exactly which requirements apply to your AI systems.

Get my free risk report

Related reading