
A Hospital Group Just Asked What You Do When Your AI Diagnostic Tool Stops Meeting Accuracy Benchmarks Mid-Contract: Answering the Article 21 Corrective Actions Section


The vendor review came eighteen months into the contract. A regional hospital group in Belgium had been using your AI-assisted radiology triage tool across three sites. The annual clinical technology review included a new section this year:

"Please describe your corrective action procedure for cases where your AI system's performance falls below the accuracy and reliability thresholds stated in your technical documentation. Under EU AI Act Article 21, what obligations apply to your company when a non-conformity is identified post-deployment? Provide documentation of any corrective actions taken in the past 12 months."

The question is well-formed. The procurement team has read the regulation.

Article 21 is one of the less-discussed articles in the EU AI Act — but it creates real operational obligations for clinical AI vendors. Here is what it requires and how to answer.

What Article 21 Requires

Article 21 of the EU AI Act establishes that providers of high-risk AI systems must take corrective actions when their system does not conform with the requirements of the regulation. The article reads:

"Providers of high-risk AI systems which consider or have reason to consider that their system is not in conformity with this Regulation shall immediately take the necessary corrective actions to bring that system into conformity, to withdraw it, or to recall it, as appropriate."

The key phrases: immediately, corrective actions, withdraw or recall.

This is not a theoretical obligation. It is the regulatory equivalent of a product recall framework — applied to AI systems. If your model's accuracy declines below the thresholds you stated in your technical documentation, or if you identify a failure mode that was not present at deployment, Article 21 is triggered.

For clinical AI tools, this has practical implications because performance drift is common: a radiology model trained on data from 2023 may behave differently on a 2026 patient population, on newer imaging equipment, or in clinical workflows that differ from those present at validation.

The Three Things Article 21 Requires You to Have

First: A conformity threshold. Your technical documentation must state the accuracy, sensitivity, and specificity levels your system was validated at. Article 21 is only actionable if you have a clear benchmark to measure against. If your documentation states "the system achieves 94% sensitivity on the validation dataset," then that is the threshold. Falling materially below it triggers the article.

If your documentation does not state a specific threshold, that is itself a documentation gap — and the hospital's question is surfacing it.

Second: A monitoring trigger. Article 21 requires immediate corrective action. You cannot take immediate action on a problem you have not detected. Your post-market monitoring plan (required under Article 72) should include performance monitoring at defined intervals, with thresholds that trigger an Article 21 review. A typical structure: monthly automated performance checks against validation benchmarks, with a defined percentage decline (for example, more than 3 percentage points below the stated threshold) triggering a formal internal review within five business days.

Third: A documented corrective action procedure. When a threshold is breached, your procedure should specify: who is notified internally (clinical safety officer, CTO, regulatory affairs lead), what the investigation protocol is, what the decision criteria are for each option — remediation, withdrawal, or recall — and what you communicate to affected deployers (the hospitals) and when.
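One way to make that procedure concrete is to keep it as a structured record rather than prose buried in a policy document. The sketch below is purely illustrative: the role names, channels, and wording are hypothetical, and nothing in this structure is mandated by the article; it simply mirrors the four elements listed above.

```python
# Hypothetical corrective-action procedure record. Role names,
# channels, and timeframes are illustrative placeholders -- yours
# come from your own policy and contracts, not the regulation.
CORRECTIVE_ACTION_PROCEDURE = {
    "internal_notification": [
        "clinical_safety_officer",
        "cto",
        "regulatory_affairs_lead",
    ],
    "investigation": {
        "protocol": "root-cause analysis against validation benchmarks",
        "deadline_business_days": 5,
    },
    # Every triggered review must end in one of the documented options.
    "decision_options": ["remediation", "withdrawal", "recall"],
    "deployer_communication": {
        "who": "named clinical contact at each affected hospital",
        "when": "within the contractually agreed notification window",
        "channel": "written performance advisory",
    },
}

def decision_is_valid(decision: str) -> bool:
    """Reject any review outcome that is not a documented option."""
    return decision in CORRECTIVE_ACTION_PROCEDURE["decision_options"]
```

Keeping the options list closed is deliberate: an investigation that ends in "monitor and hope" is not one of the outcomes Article 21 contemplates, and a structured record makes that gap visible.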

How to Answer the Hospital's Question

Structure your response around the three elements the procurement team is actually asking about:

Your conformity thresholds: Attach the relevant section of your technical documentation that states performance benchmarks. If you validated on a dataset representative of the hospital's patient population, note that.

Your corrective action trigger: Describe your post-market monitoring cadence and the specific threshold that initiates an Article 21 review. "We conduct monthly performance evaluations and initiate a formal corrective action review if observed sensitivity falls more than X percentage points below our stated threshold" is a concrete, defensible answer.

Your corrective action options and communication protocol: Describe what happens when a review is triggered. What are the possible outcomes? Who at the hospital is notified, in what timeframe, and through what channel? If you have issued any performance advisories or corrective actions in the past 12 months, describe them (without breaching confidentiality obligations to other customers).

The Question Behind the Question

Hospital procurement teams adding Article 21 to annual reviews are not looking for reassurance that your AI is perfect. They are looking for evidence that you have a system for finding out when it is not — and that you have a plan for what to do next.

A clinical AI vendor with no corrective action procedure is a liability. The EU AI Act gave hospitals a framework to require one. The ones asking Article 21 questions are the ones taking that framework seriously.

The vendors who answer this section clearly — benchmarks, triggers, procedure, communication — close renewals faster and face fewer escalations when performance issues do arise.

Try Complizo free at complizo.com
