
A University Just Asked If Your AI Tutoring System Is 'High-Risk' Under the EU AI Act: How to Navigate the Annex III Classification Question


The procurement coordinator at the university sent a questionnaire last Friday. Section 2, question 4: "Does your AI system fall under Annex III of the EU AI Act? If yes, please attach your conformity assessment documentation."

You've never had to answer that question before. And you need to get it right — because if your edtech AI is high-risk, the whole compliance framework changes.

Here's how to think through the classification, and exactly how to answer.

What Annex III Says About Educational AI

Annex III of the EU AI Act lists eight categories of high-risk AI systems. Category 3 is the one that matters for edtech:

"AI systems intended to be used for the purpose of determining access to, or assigning persons to, educational and vocational training institutions, as well as for assessing persons in educational institutions and in vocational training including for the purpose of evaluating students and for assessing the appropriate level of education for an individual."

The operative words: access, assessment, determining.

The Key Classification Questions

Does your AI determine access to educational programmes?

If your system decides who gets admitted to a course, who qualifies for a certification pathway, or who is placed into a particular learning stream — that's high-risk. An AI that ranks student applications for a competitive programme sits firmly in Annex III.

Does your AI assess students?

If your AI grades assignments, scores essays, evaluates performance, or determines whether a student passes or fails — that's likely high-risk assessment under Category 3.

Does your AI determine the "appropriate level of education"?

Adaptive learning systems that place a student into a specific learning track — "this student is at level B2, route them to the advanced track" — may fall here too.
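Taken together, the three questions above work as a first-pass screen. As a rough sketch only (the function and field names below are illustrative, and the output is a prompt for documentation, not a legal determination):

```python
# Illustrative sketch only -- not legal advice. The names here are
# hypothetical; a real classification needs a documented legal analysis.
from dataclasses import dataclass

@dataclass
class SystemProfile:
    determines_access: bool        # admits, ranks, or places applicants
    assesses_students: bool        # grades, scores, or pass/fail decisions
    assigns_education_level: bool  # routes students into tracks or levels

def annex_iii_screen(profile: SystemProfile) -> str:
    """First-pass screen against the Annex III, Category 3 criteria."""
    if (profile.determines_access
            or profile.assesses_students
            or profile.assigns_education_level):
        return "likely high-risk: prepare conformity assessment documentation"
    return "likely not high-risk: document the classification rationale"
```

Note that a grey-zone tutoring product answers "no" to all three questions, which is exactly why the written rationale discussed next matters: the screen only tells you which documentation burden you carry.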

The Grey Zone: AI Tutors and Practice Tools

Most AI tutoring products live in a grey zone. An AI that provides personalised practice exercises, explains concepts differently based on comprehension signals, or adjusts difficulty in real time is not obviously making access or assessment decisions. It's supporting learning, not determining outcomes.

This is a defensible position — but you need to document it. "We're not high-risk because our AI supports learning rather than determining access or grades" is a legitimate argument, but only if you've actually analysed your system against the Annex III criteria and can show the reasoning.

How to Answer the Classification Question

If your AI is clearly high-risk (grades students, determines course access):

State it directly: "Our system includes functionality that falls under Annex III, Category 3 of the EU AI Act. We are progressing through our conformity assessment and can share our technical documentation, risk management register, and human oversight procedures upon request."

Trying to claim you're not high-risk when your AI grades exams is the wrong strategy. Universities have legal teams. They'll catch it.

If your AI is in the grey zone (tutoring, adaptive practice):

Provide a written classification rationale: "We have analysed our system against the Annex III criteria. Our AI does not determine access to educational institutions or make assessment decisions — it provides adaptive practice exercises and explanatory content. Final assessment and grading decisions remain with educators. We have documented this classification rationale and can provide it on request."

If your AI is clearly not high-risk (content generation, search, Q&A):

State clearly: "Our system does not fall under Annex III of the EU AI Act. It [describe what it does] and does not make decisions related to educational access or assessment of individuals."

What Documentation Universities Are Looking For

Whether high-risk or not, universities will want to see:

  • Your written classification rationale (why you classified your system the way you did)
  • If high-risk: evidence of conformity assessment progress, technical documentation per Article 11, instructions for use per Article 13
  • If not high-risk: documentation of the analysis that led to that conclusion
  • How humans stay in control of decisions affecting individual students — what override mechanisms exist, who has authority to reverse an AI recommendation

The Procurement Reality

Universities are under pressure from their own legal teams and national AI authorities to verify vendor AI classification before deployment. They're not asking this question to be difficult — they need to document that they conducted due diligence.

If you can answer confidently with a written rationale, you move to the next section. If you stumble, the procurement team escalates to the legal team, and the deal slows down by weeks.

A clear written classification rationale, filed somewhere you can retrieve it, is one of the most valuable things an edtech SaaS company can produce right now. It costs almost nothing to write and closes a question that otherwise requires a legal opinion.

Try Complizo free at complizo.com
