
Is Your Hiring Software High-Risk Under the EU AI Act? Here's How to Find Out Before Your Customer Does


Last week a founder pinged me in a panic. Their biggest enterprise customer had just sent over a 40-question procurement questionnaire. Question number one: "Is your AI system classified as high-risk under the EU AI Act?"

They had no idea how to answer.

If you build hiring software that uses AI — for candidate screening, resume parsing, interview scoring, or job-ad targeting — there is a very good chance the answer is yes. And your customers are going to need proof, not guesses.

Here is how to figure out your risk classification before the next questionnaire lands in your inbox.

Why HR Tech Gets Special Treatment Under the AI Act

The EU AI Act does not treat all AI the same. It uses a tiered risk system: unacceptable, high-risk, limited, and minimal. Most SaaS products fall into limited or minimal risk. HR and hiring technology is different.

Annex III of the AI Act explicitly lists AI systems "intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates" as high-risk. That is Annex III, point 4: employment, workers management, and access to self-employment.

This is not a gray area. If your product uses AI to influence who gets hired, who gets interviewed, or who sees a job posting, the EU has already classified you.

The Three Questions That Determine Your Classification

Before you can answer your customer's questionnaire, you need to answer three questions yourself.

1. Does your product use AI as defined by the Act?

The EU AI Act defines an AI system broadly: a machine-based system that infers from inputs to generate outputs like predictions, recommendations, or decisions. If your product uses machine learning models, large language models, or statistical inference to process candidate data, the answer is almost certainly yes.

Simple keyword matching or rule-based filters probably do not qualify. But the moment you add a model that learns from data — even a basic ranking algorithm — you cross the line.
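
To make that line concrete, here is a hypothetical contrast in Python. Nothing here is a legal test, and the function names are invented for illustration: the first filter applies a fixed, human-authored rule, while the second learns a screening rule from historical data, which is exactly the kind of inference the Act's definition targets.

```python
# Hypothetical illustration, not a legal test of the Act's AI definition.
from sklearn.linear_model import LogisticRegression

REQUIRED_KEYWORDS = {"python", "sql"}

def rule_based_filter(resume_text: str) -> bool:
    """Fixed, human-authored rule: likely outside the Act's definition."""
    return REQUIRED_KEYWORDS.issubset(resume_text.lower().split())

def train_screening_model(feature_rows, past_hire_labels):
    """A model that learns screening rules from data infers outputs from
    inputs, which almost certainly makes it an 'AI system' under Article 3(1)."""
    model = LogisticRegression()
    model.fit(feature_rows, past_hire_labels)  # learns from past decisions
    return model
```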

2. Does your AI system operate in an Annex III domain?

For hiring software, this is straightforward. Annex III, point 4 covers:

  • Placing targeted job advertisements
  • Screening or filtering applications
  • Evaluating candidates in recruitment, promotion, or termination decisions
  • Monitoring or evaluating worker performance and behavior

If your product touches any of these use cases, you are in Annex III territory.

3. Does the "significant harm" exception apply?

Article 6(3) of the AI Act allows providers to argue their system does not pose a significant risk of harm despite falling into an Annex III category. However, this exception is narrow. You must document your assessment, register the system in the EU database, and the exception never applies where the system performs profiling of natural persons.

For hiring software, this exception is almost never viable. Hiring decisions directly affect people's livelihoods. Regulators will scrutinize any attempt to self-exempt, and your enterprise customers will not accept "we decided we are not high-risk" as an answer on their questionnaire.
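
One way to keep this self-assessment honest and repeatable is to write the three questions down as an explicit checklist. A minimal Python sketch, with invented field names (this is not an official assessment tool):

```python
from dataclasses import dataclass

@dataclass
class ClassificationCheck:
    uses_ai_as_defined: bool     # Q1: ML, LLM, or statistical inference?
    annex_iii_uses: list         # Q2: which Annex III point 4 uses apply
    performs_profiling: bool     # Q3 input: profiling of natural persons?

    def exception_conceivable(self) -> bool:
        """Article 6(3): never available where the system profiles people."""
        return not self.performs_profiling

    def is_high_risk(self) -> bool:
        """Default outcome; a 6(3) self-exemption needs its own documented
        assessment and registration, and is rarely viable for hiring tools."""
        return self.uses_ai_as_defined and bool(self.annex_iii_uses)

check = ClassificationCheck(
    uses_ai_as_defined=True,
    annex_iii_uses=["screening or filtering applications"],
    performs_profiling=True,
)
print(check.is_high_risk())           # True
print(check.exception_conceivable())  # False: prepare Articles 9-15 evidence
```

Note that the two methods are deliberately separate: the Article 6(3) question does not change the default classification, it only asks whether a documented self-exemption is even conceivable.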

What High-Risk Classification Actually Requires

Once you know you are high-risk, the next question your customer will ask is: "What are you doing about it?" Here is what the AI Act requires of high-risk system providers under Articles 8 through 15:

Risk management system (Article 9). You need a documented, ongoing process to identify and mitigate risks throughout your AI system's lifecycle. This is not a one-time audit.

Data governance (Article 10). Training, validation, and testing datasets must meet quality criteria. For hiring software, this means you need to demonstrate your models were not trained on biased data that discriminates by gender, ethnicity, age, or disability.

Technical documentation (Article 11). A detailed description of your system — its purpose, how it works, what data it uses, how it was tested, and what its known limitations are. This is typically what procurement teams are asking for in their questionnaires.

Record-keeping (Article 12). Automatic logging of system operations so that your AI's decisions can be traced and audited.
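
As a concrete illustration of that logging obligation, here is a minimal sketch of a structured decision log in Python. The schema and field names are assumptions for this example; the Act does not prescribe a format:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("screening.audit")

def log_screening_decision(candidate_id: str, model_version: str,
                           score: float, recommendation: str) -> None:
    """Append one traceable record per model output, in the spirit of Article 12."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "candidate_id": candidate_id,    # pseudonymised in a real system
        "model_version": model_version,  # ties the output to a model build
        "score": score,
        "recommendation": recommendation,
    }
    logger.info(json.dumps(record))

log_screening_decision("cand-8841", "ranker-2.3.1", 0.72, "advance_to_interview")
```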

Transparency (Article 13). Deployers — your customers — must be able to understand your system's output and use it appropriately. This means clear documentation, not a 200-page PDF that no one reads.

Human oversight (Article 14). Your system must allow meaningful human review of its outputs. A "rubber stamp" workflow where a human clicks approve on every recommendation does not count.
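
For contrast with the rubber-stamp pattern, here is a hypothetical sketch of an oversight gate that makes the human decision the operative one. The workflow and names are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Review:
    reviewer_id: str
    decision: str    # "accept", "override", or "escalate"
    rationale: str   # free-text reasoning, required

def apply_recommendation(ai_recommendation: str, review: Review) -> str:
    """The AI output is advisory; the reviewer's decision is what takes effect."""
    if not review.rationale.strip():
        raise ValueError("Rationale required: blanket approval is not oversight.")
    if review.decision == "override":
        return "manual_review_queue"
    return ai_recommendation if review.decision == "accept" else "escalated"
```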

Accuracy, robustness, and cybersecurity (Article 15). You need to demonstrate your system performs as documented and is resistant to adversarial manipulation.

How This Shows Up on Procurement Questionnaires

Enterprise customers are not asking about these requirements in the abstract. They are translating them into specific questions on their procurement forms. Here are the ones HR tech founders see most often:

  • "Is your AI system classified as high-risk under the EU AI Act? If so, under which Annex III category?"
  • "Can you provide your Article 11 technical documentation?"
  • "How do you ensure your training data does not introduce bias in candidate screening?"
  • "What human oversight mechanisms are built into your system?"
  • "How do you test for accuracy and robustness of your AI outputs?"

Each of these maps directly to a specific Article. If you know your classification and have your documentation organized, answering them is systematic, not stressful.

The Real Problem: Answering Consistently Across Every Deal

Knowing your risk classification is step one. The harder problem is answering these questions the same way every time.

When your head of sales answers the questionnaire for Customer A in March, and your CTO answers it for Customer B in June, the answers need to match. One inconsistency — different descriptions of your risk management process, different claims about human oversight — and you have a credibility problem that can kill a deal.

This is why founders are moving away from ad-hoc questionnaire responses. You need a single source of truth that maps every question to a verified answer, and keeps those answers consistent regardless of who on your team is filling out the form.
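
That source of truth does not have to be elaborate to work. Here is one hypothetical shape for it, mapping each recurring question to a single verified answer, an owner, and the Article it evidences (the structure is an assumption for illustration, not a description of any particular tool):

```python
# Hypothetical answer set: one verified answer per recurring question.
ANSWER_SET = {
    "high_risk_classification": {
        "question": "Is your AI system high-risk under the EU AI Act?",
        "answer": "Yes. Annex III, point 4: screening and filtering of applications.",
        "article": "Art. 6 / Annex III",
        "owner": "legal",
        "last_verified": "2025-03-01",
    },
    "human_oversight": {
        "question": "What human oversight mechanisms are built in?",
        "answer": "Every recommendation requires a recorded reviewer decision.",
        "article": "Art. 14",
        "owner": "cto",
        "last_verified": "2025-03-01",
    },
}

def answer_for(key: str) -> str:
    entry = ANSWER_SET[key]
    return f'{entry["answer"]} (maps to {entry["article"]})'
```

Whoever fills out the next questionnaire pulls from this set instead of improvising, which is what keeps March's answers matching June's.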

What to Do This Week

You do not need to wait until August 2, 2026 — the enforcement deadline — to get this right. Your customers are sending questionnaires now.

Step 1: Determine your Annex III classification. If your product uses AI in hiring, screening, or candidate evaluation, you are almost certainly high-risk under Category 4.

Step 2: Map your existing documentation to Articles 9 through 15. Identify the gaps. Most early-stage HR tech companies have partial coverage at best.

Step 3: Build a questionnaire answer set. Take the five questions above, write clear answers, and make them your baseline. Every future questionnaire response should start from this set.

Step 4: Make those answers accessible to everyone who touches procurement. Sales, legal, CTO — they all need to give the same answer.

Complizo turns this into a 10-minute workflow. Paste your customer's questionnaire, and Complizo maps each question to the right answer from your verified answer set. Same answer every time, traceable to the specific AI feature or process it describes.

Try Complizo free — paste your first questionnaire


The EU AI Act's high-risk obligations for HR tech are not optional and not distant. Your customers are asking today. The founders who can answer clearly and consistently are the ones closing deals.
