A European Hospital Just Sent Your HealthTech Company an AI Act Vendor Questionnaire — Here's Exactly How to Answer It
A healthtech founder I spoke with last week had closed a six-figure deal with a Dutch hospital network in January. In April, the hospital's procurement office sent a renewal packet. Included was a new twelve-page annex titled "AI Act Conformity — Provider Information." Without that annex signed, the renewal was frozen.
Her product is a clinical decision support tool for radiology triage. It had cleared CE marking under the Medical Devices Regulation two years earlier. She assumed that covered her. Her buyer disagreed. Under the EU AI Act, medical device AI sits in a dual regime — MDR plus AI Act — and her hospital's legal team wanted both sets of paperwork before the purchase order moved.
If you sell AI-enabled clinical decision support, diagnostic imaging, patient triage, medication management, or any AI that touches a care decision, your European hospital and ministry customers are about to ask you the same questions. Here is what is coming, and how to answer.
Why HealthTech AI Is Almost Always High-Risk
The EU AI Act classifies AI systems as high-risk through two routes: the use cases listed in Annex III, and Article 6(1). The route most healthtech founders miss is Article 6(1). It captures AI that functions as a safety component of a product already covered by the EU product-safety legislation listed in Annex I, and both the Medical Devices Regulation (MDR) and the In Vitro Diagnostic Regulation (IVDR) are on that list.
In practice, if your AI product carries a CE mark under MDR class IIa, IIb, or III — or under IVDR class B, C, or D — it is high-risk under the AI Act. The European Commission's Medical Device Coordination Group has confirmed this interpretation in MDCG 2025-6, its guidance on the interplay between the MDR/IVDR and the AI Act, published this year.
This is almost every AI-enabled medical device on the market. Radiology AI, pathology AI, ECG interpretation, sepsis prediction, discharge risk scoring, triage software — all high-risk. Hospital procurement teams know this now. That is why the questionnaires appeared.
There is a narrow carve-out for MDR class I self-certified devices and IVDR class A. If your product sits there, you are likely outside Article 6(1) scope, but you may still be captured by Annex III point 5(a) if your AI is used to evaluate eligibility for public healthcare benefits, or point 5(c) if it performs emergency healthcare patient triage.
What Changed in Hospital Procurement This Year
Before 2026, European hospitals evaluated AI medical devices mostly through their MDR paperwork, clinical evidence, and cybersecurity under NIS2. The AI Act adds a new layer. Specifically, the deployer obligations under Article 26 and Article 27 mean the hospital must:
- Verify the AI provider's technical documentation
- Ensure human oversight is implemented on the hospital's side
- For public-sector deployers, perform a Fundamental Rights Impact Assessment (FRIA) before first use
- Retain the logs automatically generated by the system for at least six months
The hospital cannot meet these obligations without data from you. That data now arrives as a vendor questionnaire, typically triggered at purchase, renewal, or material product change.
The Nine Questions Every European Hospital Is Now Sending
After reviewing twelve hospital and national-health-service questionnaires sent to healthtech vendors across Germany, the Netherlands, France, and Ireland in the last ninety days, nine questions repeat almost verbatim.
1. "What is the intended purpose of the AI system, worded to match your MDR intended use?"
Article 13 of the AI Act requires a specific intended purpose statement. Your hospital buyer will check that the AI Act intended purpose and the MDR intended use on your CE certificate match exactly. A one-word drift between the two documents is the most common reason these questionnaires get sent back.
2. "Has a notified body assessed the AI-specific requirements as part of your MDR conformity assessment?"
Under Article 43(3) of the AI Act, a single, integrated conformity assessment is permitted when the notified body is designated for both the MDR and the AI Act. Hospitals want to see that the certificate covers both. If your CE mark predates AI Act designation, they will ask for a gap assessment and a timeline.
3. "Can you provide the Article 11 technical documentation, including Annex IV elements?"
The MDR technical file and the AI Act technical documentation overlap but are not identical. The AI-specific additions include: training data provenance, validation across demographic subgroups, an explanation of how the model was selected over alternatives, and computational resources used during training.
4. "How is training and validation data governed under Article 10?"
Article 10 is the article hospital legal teams read most carefully. They want to know whether the training data represented the patient population they serve. A model trained predominantly on adult male patients performs differently on pediatric or maternal populations. Procurement teams will ask for performance metrics segmented by age, sex, and — where relevant — ethnicity.
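The subgroup reporting behind that question can be sketched in code. Here is a minimal illustration, not tied to any particular ML stack, of computing sensitivity and specificity segmented by subgroup from confirmed outcomes; the function name and record shape are hypothetical:

```python
from collections import defaultdict

def subgroup_metrics(records):
    """Compute sensitivity and specificity per demographic subgroup.

    records: iterable of (subgroup, y_true, y_pred) with binary labels,
    e.g. ("adult_male", 1, 1). Returns a dict per subgroup with
    sensitivity, specificity, and sample count n.
    """
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "tn": 0, "fn": 0})
    for group, y_true, y_pred in records:
        c = counts[group]
        if y_true and y_pred:
            c["tp"] += 1          # true positive
        elif y_true and not y_pred:
            c["fn"] += 1          # missed positive
        elif not y_true and y_pred:
            c["fp"] += 1          # false alarm
        else:
            c["tn"] += 1          # correct negative

    report = {}
    for group, c in counts.items():
        pos, neg = c["tp"] + c["fn"], c["tn"] + c["fp"]
        report[group] = {
            # None signals "not computable for this subgroup" rather than 0
            "sensitivity": c["tp"] / pos if pos else None,
            "specificity": c["tn"] / neg if neg else None,
            "n": pos + neg,
        }
    return report
```

A table generated this way, with age, sex, and ethnicity as the subgroup keys, is exactly the artifact a hospital legal team wants attached to your Article 10 answer.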
5. "What clinical risk management process is in place under Article 9?"
Article 9 risk management cross-references ISO 14971, the medical device risk management standard. Hospitals want to see the overlap explicitly mapped. A bare AI Act risk register without ISO 14971 traceability will fail review at any serious hospital.
6. "How does a clinician override your AI, and how is the override logged under Article 14?"
This is the most emotionally weighted question. Hospital medical directors remember the sepsis prediction studies that reported high false-alarm rates. They want override workflows that do not slow clinicians down, but that produce an audit trail when a clinician rejects an AI suggestion. Article 14 requires effective human oversight; in clinical settings this translates to one click to override, required reason code on disagreement, and a report exported on demand.
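As an illustration of the audit-trail half of that workflow, here is a minimal sketch of an override event record and an append-only export log. The field names and reason-code convention are hypothetical, not mandated by Article 14:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical shape of an override audit record; the fields are
# illustrative, chosen to support "exported on demand" reporting.
@dataclass
class OverrideEvent:
    case_id: str
    clinician_id: str
    ai_suggestion: str        # what the model recommended
    clinician_action: str     # what the clinician actually did
    reason_code: str          # required whenever the clinician disagrees
    timestamp: str = ""

    def __post_init__(self):
        # Stamp the event in UTC if no timestamp was supplied
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()

def append_to_audit_log(event: OverrideEvent, path: str) -> None:
    """Append one override event as a JSON line (one record per line)."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(event)) + "\n")
```

An append-only JSON Lines file keeps the clinician interaction to the one click plus a reason code, while the exported file gives the medical director the audit trail on demand.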
7. "How do you detect and report drift and adverse events?"
Article 15 accuracy and robustness combined with MDR vigilance creates a tight obligation. Hospitals want to know: how often the model is revalidated in production, what the drift detection threshold is, how you notify them when drift is detected, and how that notification ties into your MDR vigilance reporting.
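A deliberately simplified sketch of what a drift check against a validated baseline might look like. The baseline value, threshold, and notification hook are illustrative assumptions; a real implementation would sit inside your post-market surveillance process and feed your MDR vigilance workflow:

```python
BASELINE_SENSITIVITY = 0.92   # from the validated clinical evaluation (illustrative)
DRIFT_THRESHOLD = 0.05        # maximum tolerated absolute drop (illustrative)

def check_drift(recent_outcomes, notify):
    """Flag drift when production sensitivity falls below the validated
    baseline by more than the threshold.

    recent_outcomes: list of (y_true, y_pred) binary pairs from cases
    with confirmed ground truth.
    notify: callback invoked with a message on drift, e.g. the hook that
    alerts the deploying hospital and opens a vigilance ticket.
    Returns the measured sensitivity, or None if it cannot be computed.
    """
    positives = [(t, p) for t, p in recent_outcomes if t == 1]
    if not positives:
        return None  # no confirmed positives in the window yet
    sensitivity = sum(p for _, p in positives) / len(positives)
    if BASELINE_SENSITIVITY - sensitivity > DRIFT_THRESHOLD:
        notify(f"Drift detected: sensitivity {sensitivity:.2f} "
               f"vs baseline {BASELINE_SENSITIVITY:.2f}")
    return sensitivity
```

The numbers that matter to the hospital are the ones in the constants: state your revalidation cadence, your threshold, and who gets the notification, in your questionnaire answer.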
8. "How is cybersecurity of the AI component handled in light of NIS2 and MDR cybersecurity requirements?"
Article 15 cybersecurity overlaps with MDR cybersecurity guidance and NIS2 obligations that now apply to many hospitals as essential entities. Hospitals want one mapped answer showing how your controls satisfy all three.
9. "Are you registered in the EU database of high-risk AI systems under Article 49?"
Hospitals will check. Article 49 requires providers to register high-risk AI systems in the EU database before placing them on the market. One nuance worth stating in your answer: the Article 49 registration obligation attaches to Annex III systems, while a device that is high-risk only under Article 6(1) is registered through EUDAMED under the MDR instead. Say which regime applies to you and include the relevant registration identifier. If registration is pending, include the date of submission.
The FRIA Trap That Slows HealthTech Deals
Public-sector hospital deployers must complete a Fundamental Rights Impact Assessment under Article 27 before first use of a high-risk AI system. They need inputs from you to complete it — specifically, the intended purpose, affected categories of persons, foreseeable risks to fundamental rights, and the human oversight measures.
When you treat the FRIA as the hospital's problem rather than a joint document, procurement slows down. When you arrive with a pre-populated FRIA input template aligned to your Article 11 documentation, most hospitals will accept it and the deal accelerates. Having that template ready — not rewriting it per buyer — is what separates healthtech vendors that close in six weeks from those that close in six months.
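As an illustration only, a pre-populated FRIA input template can be as simple as a structured document carrying the four inputs named above. Every key and value here is hypothetical, drafted for a radiology triage product:

```python
# Hypothetical FRIA input template covering the four inputs the Article 27
# assessment needs from the provider; keys and wording are illustrative
# and should be aligned with your Article 11 documentation.
FRIA_INPUTS = {
    "intended_purpose": (
        "AI-assisted triage of radiology worklists; matches the MDR "
        "intended use statement verbatim."
    ),
    "affected_persons": [
        "patients referred for diagnostic imaging",
        "radiologists and radiographers using the worklist",
    ],
    "foreseeable_risks": [
        "delayed reads for cases deprioritised by the model",
        "differential performance across demographic subgroups",
    ],
    "human_oversight_measures": [
        "one-click clinician override with mandatory reason code",
        "override audit log exportable on demand",
    ],
}
```

Because every field is sourced from documentation you already maintain, the same template serves every hospital buyer without per-deal rewriting.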
Why HealthTech Consistency Fails Most Often
Most healthtech companies I work with have ten or more open hospital procurement processes at once. Each questionnaire is handled by a different mix of clinical affairs, regulatory, and sales engineering. Your first questionnaire answer is written by your regulatory lead. Your ninth is typed by a solutions engineer under deadline pressure.
Two months later, the medical director at Hospital A compares notes with the medical director at Hospital B at a European Society of Radiology meeting. They notice that your answer on bias testing says one thing in Hospital A's file and something different in Hospital B's. Both deals now require a second round of clarifications. In healthcare procurement, that second round almost always triggers a clinical safety committee review, which can delay the purchase order by a quarter.
The fix is not writing better answers. It is writing one verified answer set and making sure that is the only place any team member pulls from.
What to Do Before Your Next Hospital Questionnaire
You have less than four months until enforcement begins biting. Here is the minimum healthtech-specific preparation:
Step 1: Confirm your dual classification. For most MDR class IIa devices and above, the answer is "high-risk under Article 6(1) via MDR as Annex I legislation." Write that single sentence and anchor every other answer to it.
Step 2: Pull your MDR intended use statement and rewrite your AI Act intended purpose to match it word for word. Fix any drift today.
Step 3: Build the nine-answer baseline above. Cross-reference each one to a specific AI Act article and, where relevant, to ISO 14971, MDR Annex I, or NIS2.
Step 4: Pre-populate your FRIA input template. Hospitals will ask; have it ready.
Step 5: Route every hospital questionnaire through one verified answer set. One version of the truth, on Article 9 risk management, on Article 10 data governance, on Article 14 human oversight — every time.
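The steps above collapse into one data structure: a single verified answer set keyed by question, with each answer carrying its AI Act anchor and cross-references. The structure and mappings below are illustrative, mirroring the nine questions in this article:

```python
# Illustrative skeleton of the nine-answer baseline; article references
# follow the nine questions above, cross-references are examples only.
ANSWER_BASELINE = {
    "intended_purpose":        {"ai_act": "Article 13", "cross_refs": ["MDR intended use"]},
    "conformity_assessment":   {"ai_act": "Article 43", "cross_refs": ["MDR notified body certificate"]},
    "technical_documentation": {"ai_act": "Article 11 / Annex IV", "cross_refs": ["MDR technical file"]},
    "data_governance":         {"ai_act": "Article 10", "cross_refs": ["subgroup performance metrics"]},
    "risk_management":         {"ai_act": "Article 9",  "cross_refs": ["ISO 14971"]},
    "human_oversight":         {"ai_act": "Article 14", "cross_refs": ["override audit log"]},
    "drift_and_vigilance":     {"ai_act": "Article 15", "cross_refs": ["MDR vigilance"]},
    "cybersecurity":           {"ai_act": "Article 15", "cross_refs": ["MDR cybersecurity", "NIS2"]},
    "registration":            {"ai_act": "Article 49", "cross_refs": ["EU database / EUDAMED"]},
}
```

Whether this lives in a compliance platform or a controlled document, the point is the same: one source of truth that every team member quotes verbatim.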
Complizo does this for healthtech teams. Paste a hospital or national-health-service questionnaire — whether it is a short form from a district hospital or a two-hundred-question national procurement framework — and every AI Act, MDR, ISO 14971, and NIS2 question maps to the same verified answer set you built once.
Try Complizo free at complizo.com
HealthTech is the vertical where the AI Act pressure lands hardest because it sits on top of the strictest product-safety regime in Europe. The healthtech companies that will keep closing hospital deals through August 2026 are the ones who answer MDR and AI Act as one coherent story — and answer it the same way every single time.