AI Compliance for Education
AI in learning environments: student data, assessment AI, and high-risk classification
AI systems that make decisions affecting access to education are classified as high-risk under the EU AI Act. Student data is subject to FERPA in the US and to the GDPR's special protections for children in the EU. AI-based assessment, admissions, and monitoring tools therefore face significant scrutiny.
Applicable regulations
EU AI Act — High-Risk Education AI
Risk: Critical. Scope: AI used in access to educational institutions or assessment in the EU
Full conformity assessment, technical documentation, bias testing for protected characteristics, human oversight, EU AI database registration
Deadline: August 2, 2026
FERPA — Family Educational Rights and Privacy Act
Risk: High. Scope: Educational institutions receiving federal funding in the US
Student/parent consent or the "school official" exception for AI vendors accessing education records; vendor contracts designating vendors as "school officials"; data-sharing restrictions
Deadline: Ongoing
GDPR — Children's Data (Article 8)
Risk: High. Scope: Processing the data of children under 16 in the EU (member states may set the threshold between 13 and 16)
Parental consent for processing children's data, no profiling of children for advertising, enhanced protections, DPIA required
Deadline: Ongoing
COPPA (US)
Risk: High. Scope: EdTech serving children under 13 in the US
Verifiable parental consent before collecting data, data minimization, no behavioral advertising, parental access and deletion rights
Deadline: Ongoing
State Student Data Privacy Laws
Risk: Medium. Scope: EdTech vendors operating in California, New York, Colorado, and 15+ other states
SOPIPA (CA), NY Ed Law 2-d, and equivalent state laws prohibit targeted advertising using student data, require data security, restrict data sharing
Deadline: Varies by state
What to do first
Classify all AI tools used in admissions, assessment, or credential decisions as high-risk and plan conformity assessment
FERPA compliance for all EdTech vendors: sign FERPA-compliant data processing agreements
No behavioral profiling of children under 16 (EU) or 13 (US) without explicit parental consent
AI-generated assessments: implement human review for any grading or credential decision
Transparency to students and parents about when AI is used in decisions affecting them
Proctoring AI: particularly high scrutiny for bias — test across demographic groups before deployment
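One common screening heuristic for the demographic bias testing mentioned above is the "four-fifths rule": compare the rate at which the model flags each group, and investigate if the lowest rate falls below 80% of the highest. The sketch below is an assumption about how such a check could be structured, not a legally sufficient bias audit; group labels and the 0.8 threshold are illustrative.

```python
# Illustrative fairness screen for a proctoring model: compute per-group
# flag rates and apply a four-fifths-rule ratio check. This is a rough
# heuristic for triage, not a substitute for a full bias audit.
from collections import defaultdict

def flag_rates(records):
    """records: iterable of (group, flagged: bool) pairs -> {group: rate}."""
    flagged, total = defaultdict(int), defaultdict(int)
    for group, is_flagged in records:
        total[group] += 1
        flagged[group] += int(is_flagged)
    return {g: flagged[g] / total[g] for g in total}

def four_fifths_check(rates, threshold=0.8):
    """Return (ratio, passes): ratio of the lowest to the highest group rate."""
    lo, hi = min(rates.values()), max(rates.values())
    ratio = lo / hi if hi else 1.0
    return ratio, ratio >= threshold

records = [("A", True), ("A", False), ("A", False), ("A", False),
           ("B", True), ("B", True), ("B", False), ("B", False)]
rates = flag_rates(records)           # {'A': 0.25, 'B': 0.5}
ratio, ok = four_fifths_check(rates)  # ratio = 0.5 -> fails the 0.8 threshold
```

A failing ratio does not itself prove unlawful bias, but it is the kind of result that should block deployment pending investigation.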
Estimated compliance cost
$20,000–$80,000 initial + $8,000–$20,000/year ongoing
Proactive compliance typically costs one-third to one-fifth as much as post-enforcement remediation.
Generate your education AI compliance plan
ComplianceIQ maps your specific AI systems against all applicable regulations for education — and generates prioritized documentation across 108+ jurisdictions.