
AI Resume Screening Questionnaires: The 5 Questions Every HR Tech Founder Gets Asked (and Exactly How to Answer)

Customers will ask all five. Here is the exact language to send back.


A large European HR team is evaluating your product. Things are going well — until their procurement lead sends a 40-question "AI compliance questionnaire" and asks for answers by end of week.

If you sell hiring or HR software that uses AI — resume parsing, candidate ranking, interview scoring, anything that touches the candidate pipeline — this is now a normal part of every enterprise deal in the EU. And it's going to keep happening, because under the EU AI Act, AI systems used in employment decisions are classified as high-risk (Annex III). That means every one of your buyers has a legal reason to ask hard questions before they sign.

Here are the five questions you will see in almost every HR tech procurement questionnaire, and the exact framing to use when you answer.

1. "Is your product classified as a high-risk AI system under the EU AI Act?"

This is the gate question. Get it wrong and procurement will stop reading.

If your AI is used to filter, rank, score, or recommend candidates for employment, the honest answer is: yes, it is in scope of Annex III, point 4(a) — "AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates."

Don't try to wriggle out of this. Procurement teams have seen the dodges. Instead, write:

"Our candidate ranking feature is in scope of Annex III of the EU AI Act as a high-risk AI system used in employment. We have identified this classification, documented the specific feature(s) it applies to, and have the controls below in place to meet the high-risk provider obligations by August 2, 2026."

Then actually list the controls. We'll get to that.

2. "Who is the provider and who is the deployer?"

Procurement asks this because the AI Act splits obligations between the provider (the company that places the AI system on the market — usually you, the SaaS vendor) and the deployer (the company using it — usually your customer).

As the provider, you own the bigger list: risk management system, data governance, technical documentation, logging, transparency, human oversight design, accuracy/robustness/cybersecurity, conformity assessment, registration in the EU database, and a post-market monitoring plan.

Your customer, as deployer, owns a narrower list: use the system per instructions, ensure input data relevance, monitor operation, keep logs, inform workers, and run a fundamental rights impact assessment where required.

Answer by naming the split explicitly:

"Complizo Inc. is the provider of the AI system under Article 3(3) of the EU AI Act. [Customer] is the deployer under Article 3(4). Provider obligations are documented in our Evidence Pack; deployer obligations are summarised in our Customer AI Act Guide."

3. "Describe your training data and how you prevent bias."

This is where HR tech founders often blow the answer — either by being vague ("we use diverse data") or by oversharing in ways that invite follow-up pain.

You want to hit four points, briefly:

  • What data categories are used (application text, structured CV fields, historical hiring outcomes — say which)
  • Where the data comes from (customer-provided, synthetic, third-party licensed)
  • What bias-evaluation testing you run (disparate impact analysis against protected characteristics, cadence, last result)
  • How you handle data governance (Article 10: relevance, representativeness, freedom from errors, appropriate statistical properties)

Three to five sentences. Don't write a research paper. Procurement wants to see that you have a process, not a PhD.
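The disparate-impact testing in the bullets above usually means the classic four-fifths rule: compare each group's selection rate to the most-selected group's rate, and flag anything below 0.8. A minimal sketch, assuming simple 0/1 advance decisions per group (the function names, sample data, and the 0.8 convention are illustrative, not a specific vendor's methodology):

```python
def selection_rate(outcomes):
    """Fraction of candidates in a group who were advanced (1) vs not (0)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratios(group_outcomes):
    """Four-fifths rule: each group's selection rate divided by the highest
    group's rate. Ratios below 0.8 conventionally flag potential adverse
    impact and warrant review."""
    rates = {group: selection_rate(o) for group, o in group_outcomes.items()}
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

# Illustrative advance/reject outcomes per (self-reported) group
outcomes = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 5/8 advanced
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 advanced
}
ratios = disparate_impact_ratios(outcomes)
# group_b: (3/8) / (5/8) = 0.6 -> below the 0.8 threshold, flag for review
```

Naming the test, the cadence, and the last result in this level of detail is exactly the "process, not a PhD" that procurement wants.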

4. "How is human oversight implemented?"

Article 14 is explicit: high-risk AI systems must be designed so a human can "oversee their functioning," "intervene or interrupt," and "disregard, override or reverse the output."

For HR tech, this means you have to show that no candidate gets rejected purely by the algorithm. There must be a human decision-maker with the ability to see why a candidate was ranked where they were, and to override it.

Good answer:

"Every ranking or score presented by Complizo to a recruiter shows (a) the score, (b) the top factors contributing to that score, and (c) a one-click override. Final advance/reject decisions are always made by the human recruiter in the customer's workflow; our system does not auto-reject candidates."

Bad answer: "A human is always in the loop." Procurement has read that sentence 500 times. Say what the human sees and what the human can do.
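If it helps to make "what the human sees and what the human can do" concrete, the shape of the data is roughly this — a hypothetical sketch, with illustrative names, not any product's actual API:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class RankedCandidate:
    candidate_id: str
    score: float
    top_factors: List[str]                # why the model ranked them here
    overridden: bool = False
    overridden_by: Optional[str] = None
    final_decision: Optional[str] = None  # set only by a human recruiter

    def override(self, recruiter_id: str, decision: str) -> None:
        """One-click override: the recruiter, never the model, records
        the final advance/reject decision."""
        self.overridden = True
        self.overridden_by = recruiter_id
        self.final_decision = decision

# The recruiter sees the score and the factors, then decides
candidate = RankedCandidate("cand_123", 0.82, ["5 yrs Python", "domain match"])
candidate.override(recruiter_id="rec_9", decision="advance")
```

The point of the shape: the model never writes `final_decision`, so by construction no candidate is auto-rejected.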

5. "What logs do you retain, and can we access them?"

Article 12 requires automatic event logging for high-risk AI systems, and Article 19 requires providers to keep those logs for at least six months (longer if other laws apply). Deployers often want access to their own subset.

Answer specifically:

"We log every scoring event (timestamp, input hash, model version, score, top feature contributions, override if any). Logs are retained for 12 months. Your organisation's logs are exportable from the Admin panel or via our API, and are available on request for any regulatory inquiry."

Name the retention window, the export path, and the regulator-inquiry workflow. You don't need to show the logs themselves in the questionnaire — just prove they exist.
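A scoring-event log like the one in the answer above is a small, fixed record per event. One way it might look — a sketch with assumed field names, hashing inputs so the log carries no raw candidate PII:

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ScoringEvent:
    timestamp: str
    input_hash: str        # hash of the candidate input, not the raw data
    model_version: str
    score: float
    top_factors: list
    override: bool

def log_scoring_event(candidate_input: dict, model_version: str,
                      score: float, top_factors: list,
                      override: bool = False) -> str:
    """Serialise one scoring event as a JSON line for append-only storage."""
    event = ScoringEvent(
        timestamp=datetime.now(timezone.utc).isoformat(),
        input_hash=hashlib.sha256(
            json.dumps(candidate_input, sort_keys=True).encode()
        ).hexdigest(),
        model_version=model_version,
        score=score,
        top_factors=top_factors,
        override=override,
    )
    return json.dumps(asdict(event))
```

Whatever your actual schema, the questionnaire answer should name the same fields your logs actually contain — procurement may later ask for a sample.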

The pattern under all five answers

Every one of these answers follows the same shape: name the article, state the specific feature it applies to, describe the control in one or two sentences, and point to where the evidence lives.

The reason HR tech founders panic at these questionnaires isn't that the answers are hard — it's that each question needs to be anchored to a specific AI feature you ship, and most teams haven't mapped their product that way yet. Mapping "candidate ranking" to Annex III(4)(a) to Article 14 to the override button in your UI takes work, and every answer has to be consistent with every other answer you've sent to every other customer. Procurement teams compare notes.

That's the problem Complizo solves. You define your AI features once, get a risk classification, and turn every customer questionnaire into structured, consistent, ready-to-send answers — with each answer mapped back to the feature that backs it. No more rewriting "how do you handle bias" for the seventh time and hoping the language lines up.
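For teams doing that mapping by hand first, it can start as something as simple as one structure per feature. The annex and article references below are from the EU AI Act; the feature name, control wording, and evidence pointer are illustrative, not Complizo's actual schema:

```python
# Hypothetical feature-to-obligation map: one entry per shipped AI feature
FEATURE_MAP = {
    "candidate_ranking": {
        "classification": "Annex III, point 4(a) (high-risk: employment)",
        "articles": {
            "Art. 10": "data governance: documented sources, bias testing",
            "Art. 12": "automatic logging of every scoring event",
            "Art. 14": "human oversight: factor display and one-click override",
        },
        "evidence": "Evidence Pack, 'Candidate Ranking' section",
    },
}

def answer_fragment(feature: str, article: str) -> str:
    """Build one questionnaire sentence from the map, so every customer
    gets the same language for the same feature."""
    entry = FEATURE_MAP[feature]
    return (f"{feature} is in scope of {entry['classification']}. "
            f"{article}: {entry['articles'][article]}. "
            f"Evidence: {entry['evidence']}.")
```

Generating answers from one source of truth is what keeps the seventh customer's bias answer consistent with the first six.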

August 2, 2026 is when the high-risk provisions become enforceable. Your customers' procurement teams are already acting like it's live. The HR tech companies that answer cleanly are the ones keeping deals moving.

Try Complizo free — paste your first questionnaire and see what ready-to-send answers look like.
