How to Answer the "Describe Your AI System" Section of an EU AI Act Questionnaire (Template Included)
A reusable template for the highest-leverage question in any procurement questionnaire.
You open the procurement questionnaire. Question 1: "Please describe the AI system you are providing, including its intended purpose, inputs, outputs, and any third-party models used."
Most founders have one of two reactions. They freeze — because writing a precise description of an AI system from scratch, with regulator-grade language, is not what they planned to do today. Or they overshare — pasting in an engineering-blog-level explanation of their architecture that immediately raises five follow-up questions.
Both reactions cost deals. Here's a cleaner way to handle this section, plus the exact template language that buyers' compliance teams actually want to see.
Why this question matters more than the rest
The "describe your AI system" question looks like a warm-up. It isn't. It's the anchor that determines how every other answer in the questionnaire is interpreted.
If your description is fuzzy, every downstream answer becomes fuzzy. "What's your risk classification?" means nothing if the system it applies to was never nailed down. "How do you handle human oversight?" becomes unverifiable. If your description is too broad (entire platform) or too narrow (one model), the rest of the questionnaire is either too scary to sign off on or too small to be credible.
EU AI Act Article 11 and Annex IV require providers of high-risk AI systems to maintain technical documentation that, among other things, describes the intended purpose, the persons or groups affected, the inputs and outputs, and the general logic. Procurement teams know this. When they read your Question 1 answer, they're really asking: "Have you actually written your Annex IV documentation yet?"
The template
Use this exact structure. Five short paragraphs, each with a fixed job.
1. Name and intended purpose (2–3 sentences).
"[Feature name] is the AI system under this questionnaire. Its intended purpose is to [one-sentence task description], in the context of [one-sentence deployment context]. It is used by [who operates it] as part of [which workflow]."
Example: "Candidate Ranking is the AI system under this questionnaire. Its intended purpose is to score and rank inbound job applicants against an open role, in the context of our HR SaaS product used by EU-based employers. It is used by in-house recruiters as part of the pre-screen step of the hiring workflow."
2. Inputs (1 short paragraph, list-form acceptable).
"Inputs include: [data category 1], [data category 2], [data category 3]. Inputs are provided by [source]. Personal data categories are limited to [specific categories], processed under [lawful basis] per the customer's DPA."
Be concrete. "Structured CV fields (name, work history, education) and free-text cover letter, provided by the candidate via the customer's application form." Much better than "application data."
3. Outputs (1 short paragraph).
"Outputs are [type of output]: [specific format]. Outputs are delivered to [who]. They are [used as / never used as] a basis for automated decision-making; final decisions are made by [who]."
Example: "Outputs are a numeric score (0–100) and an ordered ranking, with the top three contributing factors for each score. Outputs are delivered to the customer's recruiter. They are never used as the sole basis for rejecting a candidate; final hiring decisions are made by the recruiter."
4. Models and third parties (1 paragraph).
"The system is built on [base model(s) and version(s)], [hosted/self-hosted]. [If fine-tuned]: we fine-tune on [data source] using [method]. [If any third-party AI service]: [provider, service, purpose]. No training uses customer-identifying data without explicit written consent."
Name the base model. Name the version. Name the cloud region if it matters. This is where founders over-explain architecture; don't. Procurement doesn't want a diagram; it wants to know what's inside the black box well enough to tick a box.
5. Classification and scope statement (2 sentences).
"Under the EU AI Act, we classify this system as [high-risk per Annex III point X / limited-risk subject to transparency obligations / minimal-risk / out of scope], because [one-line reason]. The system is [or is not] subject to the high-risk provider obligations that become enforceable on August 2, 2026."
Don't hide the classification at the end of the questionnaire. State it in Question 1, and the rest of the answers will flow naturally.
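The five paragraphs above are fill-in-the-blank enough to script. A minimal sketch of that idea in Python, using an abridged version of the template; all field names and the registry-entry shape are illustrative, not a Complizo or regulatory format:

```python
# Sketch: render a "describe your AI system" answer from one structured
# feature-registry entry. The template below is an abridged illustration.

TEMPLATE = (
    "{name} is the AI system under this questionnaire. Its intended purpose "
    "is to {purpose}, in the context of {context}. It is used by {operator} "
    "as part of {workflow}.\n\n"
    "Inputs include: {inputs}. Inputs are provided by {input_source}.\n\n"
    "Outputs are {outputs}. Outputs are delivered to {output_recipient}. "
    "Final decisions are made by {decision_maker}.\n\n"
    "The system is built on {base_model} ({hosting}).\n\n"
    "Under the EU AI Act, we classify this system as {classification}, "
    "because {classification_reason}."
)

def render_answer(feature: dict) -> str:
    """Fill the fixed five-paragraph template with one registry entry."""
    return TEMPLATE.format(**feature)

candidate_ranking = {
    "name": "Candidate Ranking",
    "purpose": "score and rank inbound job applicants against an open role",
    "context": "our HR SaaS product used by EU-based employers",
    "operator": "in-house recruiters",
    "workflow": "the pre-screen step of the hiring workflow",
    "inputs": "structured CV fields and a free-text cover letter",
    "input_source": "the candidate via the customer's application form",
    "outputs": "a numeric score (0-100) and an ordered ranking",
    "output_recipient": "the customer's recruiter",
    "decision_maker": "the recruiter",
    "base_model": "a named third-party LLM and version",
    "hosting": "vendor-hosted",
    "classification": "high-risk per Annex III",
    "classification_reason": "it is used to filter job applications",
}

print(render_answer(candidate_ranking))
```

The payoff is that the template lives in one place: change the wording once and every rendered answer changes with it, which is exactly the consistency property procurement teams look for.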
What not to write
A few traps that drag these answers into the bin.
Marketing language. "Our cutting-edge AI empowers recruiters with next-generation insights." Procurement sees this and sighs. Cut every adjective that wouldn't survive a legal review.
Vague scope. "Our platform uses AI in many places." If you say this, you've just told the customer's compliance team that every feature you ship is in scope — and that they need to treat the whole platform as a high-risk system. That's not what you want.
Overbroad claims of safety. "The system cannot produce biased outcomes." No provider can credibly claim this, and Article 15 doesn't ask for it — it requires appropriate levels of accuracy and robustness, not impossibility of bias. Replace with: "We evaluate the system for disparate impact on [groups] on a [cadence]; most recent evaluation result: [summary]."
Unnamed base models. "We use a large language model." Name it. "We use Anthropic Claude Sonnet 4.5 via the API, with no fine-tuning" is a sentence a compliance team can work with.
The consistency problem
The dirty secret about the "describe your AI system" question is that the answer you give on Monday's deal has to match the answer you give on Friday's deal, and the next quarter's deal, and the one after. If the wording drifts, procurement teams notice. Law firms definitely notice. A change in how you describe your own system between two customers can read like a change in scope, which reads like a change in risk.
This is why the best practice isn't to write this answer into each questionnaire from scratch. It's to write it once, maintain it as part of your AI feature registry, and pull it in verbatim every time. The first time takes work. Every time after, it's copy-and-confirm.
That's the workflow Complizo is built around: define your AI features once, get a classification per feature, and have the "describe your system" answer — and every downstream answer — generated consistently and mapped back to the feature that backs it. You still own the words. You just don't rewrite them at 11pm the night before the procurement deadline.
The next questionnaire is going to hit your inbox this week or next. The answer to Question 1 is the single most-leveraged piece of writing in your sales cycle right now.
Try Complizo free — paste your first questionnaire and let the answers come out ready-to-send.