A Hospital Just Added 14 New Questions to Your Vendor Questionnaire: How to Answer the Article 72 Post-Market Monitoring Section
The update came by email on a Tuesday. Your healthtech company had submitted a vendor questionnaire to a 900-bed academic medical center in the Netherlands three weeks earlier. The procurement lead replied with a PDF of follow-up questions:
"Section 11 requires additional detail on your post-market monitoring plan as required under Article 72 of the EU AI Act. Please describe your proactive data collection methods, your incident reporting procedure, and your serious incident reporting timeline. We also require your technical documentation on this section before contract signature."
Fourteen questions. Section 11 did not exist in the original questionnaire. The hospital's compliance team had apparently been reading the regulation and decided to add it.
You have two weeks to respond.
What Article 72 Actually Requires
Article 72 of the EU AI Act establishes a post-market monitoring obligation for providers of high-risk AI systems. It is one of the more operationally complex articles in the regulation because it requires you to have a system — not just a policy — for collecting and analyzing data about how your AI performs after deployment.
The core obligation:
Providers of high-risk AI systems shall establish and document a post-market monitoring system, and shall actively collect, document, and analyze relevant data provided by deployers and other sources to assess whether the system continues to meet the requirements of this Regulation throughout its lifecycle.
For healthtech companies, "relevant data" typically includes:
- Clinical outcome data (where accessible and contractually agreed)
- User-reported anomalies and errors
- Performance drift indicators (accuracy decline over time)
- Near-miss events involving AI outputs
- Formal serious incident reports
The Three Parts of Section 11
Hospital procurement teams are usually asking about three distinct but related requirements:
Part 1: Proactive Data Collection Methods
Article 72 requires the post-market monitoring system to actively and systematically collect, document, and analyze relevant data on how the system performs in the field. This means you need more than a passive bug report inbox.
Your answer should describe:
What data you collect: Model performance metrics (accuracy, sensitivity, specificity if applicable), user correction events (cases where a clinician overrode or flagged the AI output), system errors, and latency metrics that might indicate degraded performance.
How you collect it: Automated telemetry within the product, structured feedback prompts after clinical use events, periodic accuracy evaluations against validation datasets, and — where contractually permitted — outcome data correlation.
How frequently you analyze it: A monthly review cadence is typical for post-deployment monitoring, with a quarterly formal report against your original validation benchmarks as a common structure. Any anomaly above a defined threshold should trigger an out-of-cycle review.
A strong answer is specific about cadence and thresholds, not vague about "ongoing monitoring."
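To make "specific about cadence and thresholds" concrete, here is a minimal sketch of an out-of-cycle review trigger. The metric names, threshold values, and data structures are illustrative assumptions, not anything prescribed by Article 72; your own thresholds should come from your validation study.

```python
from dataclasses import dataclass

# Illustrative thresholds only; real values come from your validation study.
DRIFT_THRESHOLDS = {
    "sensitivity": 0.02,    # max tolerated drop vs. the validation benchmark
    "specificity": 0.02,
    "override_rate": 0.05,  # max tolerated rise in clinician overrides
}

@dataclass
class MetricSnapshot:
    name: str
    baseline: float  # value from the original validation report
    current: float   # value observed in the monthly monitoring window

def out_of_cycle_review_needed(snapshots: list[MetricSnapshot]) -> list[str]:
    """Return the metrics that breached their drift threshold this month."""
    breaches = []
    for snap in snapshots:
        limit = DRIFT_THRESHOLDS.get(snap.name)
        if limit is None:
            continue
        # A rise in override_rate is bad; a drop in accuracy-style metrics is bad.
        if snap.name == "override_rate":
            delta = snap.current - snap.baseline
        else:
            delta = snap.baseline - snap.current
        if delta > limit:
            breaches.append(snap.name)
    return breaches

if __name__ == "__main__":
    monthly = [
        MetricSnapshot("sensitivity", baseline=0.94, current=0.91),
        MetricSnapshot("override_rate", baseline=0.03, current=0.04),
    ]
    flagged = out_of_cycle_review_needed(monthly)
    if flagged:
        print(f"Out-of-cycle review triggered by: {', '.join(flagged)}")
```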
Part 2: Incident Reporting Procedure
The EU AI Act distinguishes between incidents (which you manage internally) and serious incidents (which trigger regulatory reporting obligations).
Your internal incident procedure should describe:
- How users report anomalies (in-product reporting mechanism, dedicated email, named responsible person)
- How reports are triaged (who reviews, within what timeframe)
- What constitutes a reportable finding vs. a product improvement note
- How findings feed back into the risk management system under Article 9
Hospitals want to know that if a nurse spots a clinically anomalous AI output, there is a clear path for that observation to reach your team and be acted on.
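As a sketch of how that path might be recorded internally, the structure below shows one way to capture an incident report and its triage outcome. The field names and triage categories are assumptions for illustration; the fixed points from the Act are that findings must be triaged and must feed back into the Article 9 risk management system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Optional

class Triage(Enum):
    PRODUCT_IMPROVEMENT = "product_improvement"      # feeds the product backlog
    INCIDENT = "incident"                            # managed and logged internally
    POTENTIAL_SERIOUS_INCIDENT = "potential_serious" # escalate per Article 73

@dataclass
class IncidentReport:
    reported_by: str   # role, not name: "ICU nurse", "radiologist"
    channel: str       # "in-product", "support email", "account manager"
    description: str
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    triage: Optional[Triage] = None        # set by the clinical safety reviewer
    risk_review_ref: Optional[str] = None  # link back into the Article 9 risk file

# Example: a clinician flags an anomalous output via the in-product mechanism.
report = IncidentReport(
    reported_by="ICU nurse",
    channel="in-product",
    description="Risk score dropped sharply despite worsening vitals.",
)
report.triage = Triage.POTENTIAL_SERIOUS_INCIDENT
```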
Part 3: Serious Incident Reporting Timeline
Article 73 requires providers to report serious incidents to market surveillance authorities. Under Article 3(49), a serious incident is an incident or malfunctioning of an AI system that directly or indirectly leads to the death of a person or serious harm to a person's health, serious harm to property or the environment, a serious and irreversible disruption of the management or operation of critical infrastructure, or the infringement of obligations under Union law intended to protect fundamental rights.
For clinical AI tools — even those with human oversight built in — this is not a theoretical risk.
The reporting timelines under Article 73:
- Default: report immediately after establishing a causal link between the AI system and the serious incident (or the reasonable likelihood of one), and in any event no later than 15 days after becoming aware of the incident
- 10 days: where the serious incident involves the death of a person
- 2 days: in the event of a widespread infringement or a serious incident involving serious and irreversible disruption of critical infrastructure
- Where needed to report on time, an initial incomplete report may be submitted, followed by a complete report
Your answer should name the responsible function (typically your clinical safety officer or regulatory affairs team), the trigger conditions, and the notification chain.
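A small sketch of the deadline arithmetic follows. The classification and outer time limits mirror the Article 73 timelines above; the function and type names are illustrative, and the clock here is simplified to run from awareness of the incident, whereas the Act also ties reporting to establishing a causal link.

```python
from datetime import datetime, timedelta, timezone
from enum import Enum

class SeriousIncidentType(Enum):
    DEATH = "death of a person"
    CRITICAL_INFRASTRUCTURE = "widespread infringement / critical infrastructure"
    OTHER = "other serious incident"

# Outer reporting limits under Article 73; reporting is "immediately",
# these are only the hard caps counted from awareness.
REPORTING_CAPS = {
    SeriousIncidentType.CRITICAL_INFRASTRUCTURE: timedelta(days=2),
    SeriousIncidentType.DEATH: timedelta(days=10),
    SeriousIncidentType.OTHER: timedelta(days=15),
}

def reporting_deadline(aware_at: datetime, incident_type: SeriousIncidentType) -> datetime:
    """Latest permissible notification time, counted from awareness of the incident."""
    return aware_at + REPORTING_CAPS[incident_type]

aware = datetime(2025, 3, 3, 9, 0, tzinfo=timezone.utc)
print(reporting_deadline(aware, SeriousIncidentType.OTHER))
# -> 2025-03-18 09:00:00+00:00
```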
What "Technical Documentation on This Section" Means
The hospital is asking for the Article 11 + Annex IV technical documentation section that covers your post-market monitoring plan. Annex IV, point 9 requires a description of the system in place to evaluate the AI system's performance in the post-market phase, including the post-market monitoring plan referred to in Article 72.
This is a written document, not a verbal assurance. It should describe your monitoring system architecture, data collection methods, incident procedures, reporting timelines, and the responsible persons by role.
If you do not have a formal written post-market monitoring plan, this is the right moment to draft one. The plan does not need to be long — 4-6 pages covering the above elements is sufficient for most clinical AI tools at the current stage of enforcement.
The Deeper Question
Hospital procurement teams adding Article 72 questions mid-review are not trying to create extra work. They are signaling that their legal and clinical safety teams have reviewed the regulation and identified a requirement they cannot waive.
The question behind Section 11 is: "If your AI makes a clinical error after we deploy it, do you have a system for finding out — and do we have a process for reporting it?"
That is a reasonable question for a hospital to ask of any clinical AI vendor. The EU AI Act gave it formal structure. The hospitals are now enforcing it in procurement.
Try Complizo free at complizo.com — paste your post-market monitoring questions and get answers mapped to your actual technical setup.