A Belgian University Just Asked Whether Your AI Exam Tool Counts as 'Biometric Categorisation': How to Answer the EU AI Act Classification Question for Online Proctoring
The email from the KU Leuven procurement office arrived on a Thursday afternoon.
Your AI-assisted online exam supervision tool was three weeks from contract signature — a €78K ARR deal covering 12,000 students across three faculties. The procurement officer had one question before the legal sign-off: "Please clarify whether your system uses facial analysis, gaze estimation, or any other biometric inference for the purpose of categorising students during exam sessions, and whether this classifies your product as a 'real-time remote biometric identification system' or 'biometric categorisation system' under the EU AI Act."
Your legal contact was on holiday. Your CTO had never had to answer this question in writing. The contract was contingent on the answer.
This post walks through exactly how edtech CTOs should answer the proctoring classification question.
Why This Question Is Hard
The EU AI Act contains some of its sharpest restrictions around biometric systems. Article 3(40) defines a "biometric categorisation system" as an AI system that assigns natural persons to specific categories on the basis of their biometric data, and the Act reserves its harshest treatment for categorisation that infers sensitive attributes: race, ethnicity, political opinions, religious beliefs, trade union membership, sexual orientation, or health status.
Separately, Article 5 sets out the prohibited AI practices: Article 5(1)(a) bans AI systems that deploy subliminal, manipulative, or deceptive techniques to materially distort a person's behaviour, and Article 5(1)(b) bans systems that exploit vulnerabilities arising from age, disability, or a person's social or economic situation.
Separately again, real-time remote biometric identification systems used in publicly accessible spaces face the strictest restrictions in the Act.
Online proctoring tools touch biometrics — but almost none of them are biometric identification or biometric categorisation systems in the legally relevant sense. The problem is that procurement teams at universities don't always know the distinctions. They see "facial analysis" in your product description and send the worst-case question.
Your answer needs to do two things: accurately classify what your system does, and explain why it does not fall into the prohibited or maximally restricted categories.
The Three Classification Questions and How to Answer Them
1. "Is your system a real-time remote biometric identification system?"
What the law says: Article 3(42) defines a "real-time" remote biometric identification system as one that identifies natural persons without their active involvement, typically at a distance, by comparing their biometric data against a reference database, with the capture, comparison, and identification all occurring without significant delay. Its use in publicly accessible spaces for law enforcement purposes is a prohibited practice under Article 5(1)(h), and the Act's prohibitions have applied since 2 February 2025, well ahead of the general application date of 2 August 2026.
How to answer: Almost no proctoring tool is a real-time remote biometric identification system under this definition, because identification requires a search against a biometric database of enrolled individuals. If your tool uses facial analysis to verify that the same person who started the exam is still in front of the camera — not to identify an unknown person from a database — it is a one-to-one biometric verification system, not a one-to-many biometric identification system.
State this explicitly: "Our system performs biometric verification (one-to-one matching between the live camera feed and the enrolled student photo provided at registration), not biometric identification (one-to-many search against a reference database). It does not meet the definition of a real-time remote biometric identification system under Article 3(42) of the EU AI Act."
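For the engineers drafting this answer, the one-to-one versus one-to-many distinction is easy to show concretely. The following is a minimal sketch, not production code: the function names, the embedding representation, and the 0.6 threshold are illustrative assumptions. Verification compares one live face embedding against the single template enrolled at registration; identification searches a gallery of many enrolled identities to work out who an unknown person is.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# One-to-one *verification* (what a typical proctoring tool does):
# compare the live embedding against the single template the student
# enrolled at registration. No database search is involved.
def verify(live_embedding: np.ndarray,
           enrolled_template: np.ndarray,
           threshold: float = 0.6) -> bool:
    return cosine_similarity(live_embedding, enrolled_template) >= threshold

# One-to-many *identification* (the activity the Act restricts most
# severely): search a reference gallery of enrolled identities to
# determine who an unknown person is.
def identify(live_embedding: np.ndarray,
             gallery: dict[str, np.ndarray],
             threshold: float = 0.6) -> str | None:
    best_id, best_score = None, threshold
    for person_id, template in gallery.items():
        score = cosine_similarity(live_embedding, template)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id
```

If your product only ever does the first of these, that fact is the core of the answer to question 1.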
2. "Is your system a biometric categorisation system?"
What the law says: The Article 3(40) definition of "biometric categorisation" is broad (assigning persons to categories on the basis of their biometric data), but the restrictions attach to sensitive attributes. Article 5(1)(g) prohibits biometric categorisation that deduces or infers race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation, and Annex III, point 1(b) lists biometric categorisation according to sensitive or protected attributes as high-risk.
How to answer: Proctoring tools that use gaze estimation or head pose detection to flag potential rule violations are not categorising students by sensitive attributes — they are detecting behavioural anomalies (looking away from the screen, presence of a second face in frame, audio events). These are behavioural signals about exam conduct, not the sensitive-attribute categorisation that Article 5(1)(g) and Annex III, point 1(b) target.
Be specific: "Our system uses gaze and head pose estimation to detect behavioural anomalies during an exam session — specifically, extended off-screen gaze, head rotation beyond a defined threshold, or the presence of additional faces in the camera frame. This constitutes behavioural detection, not biometric categorisation of persons by sensitive attributes. The system does not infer ethnicity, health status, political opinion, or any other protected characteristic from biometric data."
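A stripped-down sketch of what "behavioural detection, not categorisation" means in engineering terms can help non-lawyers on both sides see the point. Everything below is illustrative: the field names, thresholds, and event kinds are assumptions, not our actual pipeline. What matters is that the outputs are timestamped events about exam conduct, with no field that records any attribute of the person.

```python
from dataclasses import dataclass

@dataclass
class FrameObservation:
    timestamp: float          # seconds since exam start
    gaze_offscreen: bool      # gaze estimate falls outside the screen region
    head_yaw_degrees: float   # head rotation relative to the camera
    face_count: int           # number of faces detected in the frame

@dataclass
class AnomalyEvent:
    kind: str                 # e.g. "off_screen_gaze", "extra_face"
    start: float
    end: float

def detect_anomalies(frames: list[FrameObservation],
                     max_yaw: float = 45.0,
                     min_offscreen_seconds: float = 5.0) -> list[AnomalyEvent]:
    """Flag behavioural anomalies; nothing here infers attributes of the person."""
    events: list[AnomalyEvent] = []
    run_start = None
    for f in frames:
        looking_away = f.gaze_offscreen or abs(f.head_yaw_degrees) > max_yaw
        if looking_away and run_start is None:
            run_start = f.timestamp
        elif not looking_away and run_start is not None:
            if f.timestamp - run_start >= min_offscreen_seconds:
                events.append(AnomalyEvent("off_screen_gaze", run_start, f.timestamp))
            run_start = None
        if f.face_count > 1:
            # One event per offending frame in this simplified sketch.
            events.append(AnomalyEvent("extra_face", f.timestamp, f.timestamp))
    if run_start is not None and frames:
        last = frames[-1].timestamp
        if last - run_start >= min_offscreen_seconds:
            events.append(AnomalyEvent("off_screen_gaze", run_start, last))
    return events
```

Note what is absent: no field in the event schema records race, health, emotion, or any other characteristic of the student, which is the substance of the answer to question 2.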
If your system does perform any sentiment or emotion inference (detecting nervousness, distress, or other affective states), this is a significantly harder question. Article 5(1)(f) prohibits AI systems that infer the emotions of natural persons in the areas of workplace and education institutions, except for medical or safety reasons, and Annex III, point 1(c) lists other emotion recognition systems as high-risk. Emotion inference in exam contexts is therefore not merely a regulatory red flag; it sits inside a prohibited practice unless an exception applies. If you use emotion recognition, be transparent about it, explain the specific constraints on how its outputs are used, and expect this to be the hardest part of the classification conversation.
3. "Does the system fall under Annex III as a high-risk AI system?"
What the law says: Annex III, point 3 covers AI systems used in education and vocational training — determining access or admission, evaluating learning outcomes, assessing the appropriate level of education a person will receive, and, most directly relevant to proctoring, "monitoring and detecting prohibited behaviour of students during tests" (point 3(d)).
How to answer: This is where proctoring tools most clearly come into scope. Point 3(d) describes exam-monitoring systems almost verbatim, so the honest answer is usually that the system is a high-risk AI system rather than outside Annex III altogether; a tool whose report recommends exam invalidation, and thereby affects whether a student passes a course, also touches the learning-outcome entries. The key question to answer for procurement is whether your tool makes or substantially influences the educational outcome decision, or whether it is purely an input to a mandatory human review process, because that is what shapes the Article 14 human oversight story and the compliance steps the university takes on as deployer.
Answer precisely: "Our system produces an anomaly report at the end of each exam session. This report is reviewed by an authorised human examiner before any action is taken. Our system does not directly affect student grades or academic status. All decisions to invalidate an exam, issue a warning, or take disciplinary action require human review and approval by the institution. The report is an input to that process, not a decision."
If this human review step is enforced in your product — meaning the platform cannot flag an exam as invalidated without a logged human confirmation — say so. That workflow is what distinguishes a high-risk AI system with adequate human oversight (Article 14 compliant) from one that automates consequential decisions.
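If you want to show, not just assert, that the human review step is enforced, a compact way is to point at the state model that makes automatic invalidation impossible. The sketch below is a minimal illustration under assumed names (ExamSession, record_human_decision, and the status values are hypothetical, and real audit logging would live in a database): the only path to an invalidated exam requires a logged reviewer identity and rationale.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class ExamStatus(Enum):
    SUBMITTED = "submitted"
    FLAGGED_FOR_REVIEW = "flagged_for_review"  # AI anomaly report attached
    INVALIDATED = "invalidated"                # reachable only via human decision
    CLEARED = "cleared"

@dataclass
class ExamSession:
    exam_id: str
    status: ExamStatus = ExamStatus.SUBMITTED
    audit_log: list[dict] = field(default_factory=list)

    def attach_anomaly_report(self, report_id: str) -> None:
        # The AI pipeline can only move a session into human review.
        self.status = ExamStatus.FLAGGED_FOR_REVIEW
        self.audit_log.append({
            "event": "anomaly_report_attached",
            "report_id": report_id,
            "at": datetime.now(timezone.utc).isoformat(),
        })

    def record_human_decision(self, reviewer_id: str, invalidate: bool, rationale: str) -> None:
        # Invalidation requires a logged human identity and rationale;
        # no code path sets INVALIDATED automatically.
        if self.status is not ExamStatus.FLAGGED_FOR_REVIEW:
            raise ValueError("no pending anomaly report to review")
        if not reviewer_id or not rationale:
            raise ValueError("human reviewer and rationale are required")
        self.status = ExamStatus.INVALIDATED if invalidate else ExamStatus.CLEARED
        self.audit_log.append({
            "event": "human_review_decision",
            "reviewer_id": reviewer_id,
            "invalidated": invalidate,
            "rationale": rationale,
            "at": datetime.now(timezone.utc).isoformat(),
        })
```

The logged decision records in a workflow like this are also exactly what the human oversight document described in the next section should point to.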
What to Send With Your Answer
Procurement teams at universities with active data protection officers have read enough AI Act summaries to know when an answer is circular. To move the deal forward, pair your classification answer with two supporting documents:
A classification rationale memo (one page) that states your product category under each relevant Annex III entry, explains why each prohibited practice under Article 5 does not apply, and notes the Article 14 human oversight workflow. This is not a legal opinion — it is a technical description framed against the statutory definitions.
Your human oversight workflow description — a process diagram or step-by-step description showing what happens between your anomaly report output and any academic action. Show where the human reviewer sits, what data they review, and what confirmation step is logged. This directly answers the Annex III Category 3 concern.
These two documents transform a three-sentence email answer into a substantive response that a university legal department can sign off on without demanding a 30-minute call.
The Underlying Rule for Any Classification Question
When a procurement team asks whether your product is "X type of AI system under the EU AI Act," they are usually asking one of two underlying questions: (1) Is this product legal to use at our institution, and (2) What additional compliance steps will we have to take if we buy this?
Your answer should directly address both. Tell them what your product is, tell them what it is not, and tell them what the compliance workflow looks like when they deploy it. Uncertainty is costlier to a procurement officer than a high-risk classification with a clear compliance path.
Try Complizo free at complizo.com