Your EdTech Product Is Probably High-Risk Under the EU AI Act — Here's What a School District Will Ask You Next
A founder of an adaptive learning platform sent me the procurement questionnaire from a Dutch school district last week. Forty-two questions. Page three was entirely about the EU AI Act. Question one: "Under which Annex III category is your AI system classified, and what is your conformity assessment status?"
Her product recommended reading levels to students, graded open-ended responses, and flagged students for teacher review. She had been selling into EU K-12 and higher-ed for three years. She had never been asked these questions before. Now she had eight days to respond or the district would move to the next vendor.
If you build edtech that uses AI — adaptive learning, automated grading, admissions scoring, plagiarism detection, or proctoring — this is the questionnaire wave heading your way. Here is what district IT teams, university procurement offices, and ministry of education procurement leads are about to ask you, and how to answer.
Why Education Is a Full Annex III Category
The EU AI Act does not treat edtech as a peripheral concern. Annex III, Category 3 is dedicated to education and vocational training, and the list is broader than most founders expect:
- 3(a) — AI systems intended to determine access, admission, or assignment of natural persons to educational and vocational training institutions at all levels
- 3(b) — AI systems intended to evaluate learning outcomes, including when those outcomes are used to steer the learning process
- 3(c) — AI systems intended to assess the appropriate level of education that an individual will receive or be able to access
- 3(d) — AI systems intended for monitoring and detecting prohibited behavior of students during tests
Read that list carefully. If your product does any of these things — not just formal "assessment," but steering a learner down a particular track, recommending content based on inferred ability, or watching webcams during exams — you fall into Annex III. High-risk.
The "steering the learning process" language in 3(b) is especially broad. Any adaptive learning engine that changes what a student sees next based on their performance is steering the learning process. That is the core feature of most modern edtech. It is also what makes the vast majority of AI-powered edtech high-risk under the Act.
The Procurement Moment Is Different in EdTech
Unlike SaaS buyers in enterprise IT, edtech procurement decisions flow through public-sector buyers in most of the EU. That means three things:
First, their questionnaires are built from public procurement templates that are updated by ministries, not by individual buyers. When the Belgian or French education ministry updates its AI procurement clause, every district using that template sends you the same new questions a month later.
Second, the evaluation is often binary. Either your documentation satisfies the AI Act check or it does not. A procurement officer scoring on a rubric does not have the authority to negotiate you past a documentation gap the way an enterprise CIO might.
Third, the decisions are frequently public record. A rejection quietly documented in a procurement report becomes reference material for other districts. One bad questionnaire can follow you for a year.
The Six Questions You Should Expect
These are the questions I see most often on EU edtech procurement forms in early 2026, mapped to the Articles they derive from.
1. "Is your AI system classified as high-risk under Annex III of the AI Act, and if so, under which subsection?"
They want the specific subsection — 3(a), 3(b), 3(c), or 3(d) — because that drives different documentation. An adaptive learning engine is usually 3(b). An admissions recommendation tool is 3(a). Online proctoring is 3(d). If you do more than one, declare all of them.
2. "Can you provide your Article 11 technical documentation, including Annex IV content?"
Annex IV spells out exactly what needs to be in the documentation:

- System purpose and context of use
- Design specifications
- Training data description
- Validation and testing procedures
- Performance metrics
- Post-market monitoring plan per Article 72

A public-sector procurement team will check for each item.
3. "What steps have you taken to ensure your training data does not discriminate against protected groups?"
Article 10 on data governance has teeth in education because training data for learning outcome prediction or admissions scoring tends to encode historical inequities. Procurement officers will ask specifically about performance across:
- Native-language speakers versus non-native speakers
- Students with disabilities and learning differences
- Socioeconomic groups
- Gender
- Age brackets
A general "we test for bias" does not pass. They want testing protocols and subgroup performance metrics.
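What does a subgroup performance report actually contain? At minimum, the same metric computed per group plus the worst-case gap between groups. Here is a minimal sketch in Python; the group labels, sample data, and the `subgroup_accuracy` / `worst_case_gap` names are illustrative, not prescribed by the Act or by any procurement template:

```python
from collections import defaultdict

def subgroup_accuracy(records):
    """Per-subgroup accuracy for a binary grading/flagging task.

    `records` is a list of (subgroup, predicted, actual) tuples.
    Returns {subgroup: accuracy}.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

def worst_case_gap(metrics):
    """Largest accuracy gap between the best- and worst-served subgroup."""
    values = list(metrics.values())
    return max(values) - min(values)

# Hypothetical grading predictions, tagged by language background.
records = [
    ("native", 1, 1), ("native", 0, 0), ("native", 1, 1), ("native", 1, 0),
    ("non-native", 1, 0), ("non-native", 0, 0),
    ("non-native", 1, 0), ("non-native", 1, 1),
]
metrics = subgroup_accuracy(records)
gap = worst_case_gap(metrics)
```

In a real report you would repeat this for each axis the officer asks about (language, disability, socioeconomic proxies, gender, age) and disclose the gap alongside the headline metric, not instead of it.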
4. "How do you provide transparency to students, parents, and teachers about AI-generated outputs?"
Article 13 requires transparency to deployers. Article 50 requires disclosure to people interacting with AI systems. In an education context, the people interacting include students (often minors), their parents, and classroom teachers. Your customer wants to see what is communicated, in what language, at what reading level, and where in the product flow.
5. "What human oversight controls does a teacher or administrator have over AI outputs?"
Article 14 applies to edtech in a specific way. Teachers cannot be expected to review every AI-generated recommendation in a class of thirty students. Procurement offices increasingly ask for tiered oversight: routine recommendations the teacher can batch-review, flagged cases that force individual review, and hard blocks for high-stakes outputs like grade assignment or disciplinary flags.
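The tiered-oversight idea can be sketched as a simple routing rule. The tier names, the high-stakes list, and the confidence threshold below are assumptions for illustration; Article 14 does not prescribe any of them, and a real deployment would tune the threshold with the school:

```python
# Hypothetical oversight tiers; Article 14 does not mandate these labels.
BATCH_REVIEW = "batch_review"            # teacher reviews in bulk
INDIVIDUAL_REVIEW = "individual_review"  # forces one-by-one sign-off
HARD_BLOCK = "hard_block"                # output withheld until a human decides

# Assumed set of high-stakes output types for this sketch.
HIGH_STAKES = {"grade_assignment", "disciplinary_flag"}

def oversight_tier(output_type, model_confidence):
    """Route an AI output to a human-oversight tier (Article 14 sketch)."""
    if output_type in HIGH_STAKES:
        return HARD_BLOCK
    if model_confidence < 0.8:  # assumed threshold, tune per deployment
        return INDIVIDUAL_REVIEW
    return BATCH_REVIEW
```

The point procurement teams look for is not the code but the design: routine outputs are reviewable in bulk, uncertain ones force individual attention, and high-stakes ones never auto-apply.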
6. "Do you process data of minors, and how does that processing align with Article 5's prohibited practices?"
Article 5 prohibits AI systems that exploit vulnerabilities of specific groups of persons, including those due to their age. Anything targeted at minors gets extra scrutiny. Procurement teams are asking whether your product uses any persuasive or gamification techniques that could be read as exploiting age-related vulnerabilities. This is the question that most edtech founders are least prepared for.
The Overlap With National Education Law
The AI Act does not override national rules on educational assessment, student data, or teacher authority. In practice, this means your procurement response has to answer the AI Act question and show you are not in conflict with the local education code.
A UK-style approach that works fine in England may land badly in Germany, where the Länder have strong views on who can make assessment decisions. The safe response pattern is to answer the AI Act requirement, then explicitly note where national education law adds further constraints you also respect.
Procurement teams notice when vendors understand this nuance. It separates the mature edtech vendors from the startups who are pattern-matching from US-centric questionnaires.
Why Consistency Matters More in Education Than Anywhere Else
EdTech founders tell me this is the single biggest pain point. One district asks about bias testing in March. Six months later, a ministry procurement team asks the same question on a different form. The answer needs to be the same — word for word — or the ministry procurement team will notice the discrepancy, compare notes with the district, and flag you.
Public procurement has long institutional memory. Your answer from 2024 is still in someone's file and will be compared against your answer from 2026. An "I don't remember what we said last time" response is how deals die.
Before the Next RFP Drops
There are fewer than four months until August 2, 2026. The specific steps for edtech:
Step 1: Declare your Annex III subsections. Most edtech products fit 3(b). Admissions tools fit 3(a). Proctoring fits 3(d). If you span multiple, acknowledge all of them.
Step 2: Build an Annex IV documentation pack. Not a marketing deck. Not a security whitepaper. The specific items listed in Annex IV, mapped one to one.
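The one-to-one mapping can be kept honest with a trivial completeness check. The item names below paraphrase Annex IV's headline sections and the file paths are hypothetical; the structure, not the strings, is the point:

```python
# Annex IV headline items, paraphrased for this sketch — check the
# official text for the exact required content of each.
ANNEX_IV_ITEMS = [
    "general_description_and_intended_purpose",
    "design_specifications_and_architecture",
    "training_data_description_and_governance",
    "validation_and_testing_procedures",
    "performance_metrics_and_limitations",
    "risk_management_measures",
    "post_market_monitoring_plan",
]

def missing_items(pack):
    """Return Annex IV items with no document mapped in the pack."""
    return [item for item in ANNEX_IV_ITEMS if not pack.get(item)]

# Hypothetical partial pack: two items mapped, five still open.
pack = {
    "general_description_and_intended_purpose": "docs/purpose.md",
    "training_data_description_and_governance": "docs/data.md",
}
gaps = missing_items(pack)
```

Run something like this in CI against your documentation repo and a gap never reaches a procurement officer before it reaches you.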
Step 3: Prepare your subgroup performance report. Language proficiency, disability status, socioeconomic proxies, gender. If you do not have this data yet, start collecting it now. Procurement teams will not wait for you to catch up.
Step 4: Write your six answers — to the questions above — and lock them in a single verified source. Every future RFP response should pull from that source. No ad hoc answers in a sales engineer's Google Doc.
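A "single verified source" can be as simple as a keyed answer store with a stable fingerprint, so that two RFP responses months apart can be diffed for drift. This is a minimal sketch; the `AnswerStore` class and the question IDs are invented for illustration:

```python
import hashlib

class AnswerStore:
    """Single source of truth for recurring questionnaire answers."""

    def __init__(self):
        self._answers = {}

    def set_answer(self, question_id, text):
        self._answers[question_id] = text

    def get_answer(self, question_id):
        return self._answers[question_id]

    def fingerprint(self, question_id):
        """Stable hash so responses sent months apart can be compared."""
        digest = hashlib.sha256(self._answers[question_id].encode("utf-8"))
        return digest.hexdigest()[:12]

store = AnswerStore()
store.set_answer("bias_testing", "We report subgroup accuracy per release.")
march_response = store.fingerprint("bias_testing")
september_response = store.fingerprint("bias_testing")
```

If the March and September fingerprints differ, the answer changed between responses — exactly the discrepancy a ministry reviewer would otherwise find for you.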
Complizo turns this into a repeatable workflow for edtech teams. Paste the district's questionnaire, and every AI Act question maps to the same verified answer you wrote once. Same answer in every RFP, traceable to the specific feature or process it describes.
Try Complizo free at complizo.com
EdTech has always been a careful-buying market. The AI Act gives procurement teams in education a concrete checklist for the first time. The vendors who can answer it cleanly will close; the ones who can't will watch their pipelines compress through the summer of 2026.