When Your Customer Asks About Bias in Your AI Hiring Tool: How to Answer the Hardest Compliance Questions
A European bank asked a hiring software company a question last month that stopped the deal cold.
Not a general question about the EU AI Act. A specific one: "Describe the steps you have taken to test your AI system for potential discriminatory effects against protected characteristics including gender, ethnicity, and age."
The founder knew their product worked well. They had customers. Low churn. But no one had ever made them put their bias testing process in writing before. And now, with a seven-figure enterprise deal on the line, they had 48 hours to answer.
This is the questionnaire moment that HR tech founders are walking into right now. The bias and discrimination section is the hardest part of an AI compliance questionnaire — and the one that kills the most deals when answered badly.
Here is what your buyers are asking, and exactly how to answer.
Why HR Tech Gets the Hardest Bias Questions
Under the EU AI Act, AI systems used for recruitment, candidate selection, or evaluation of people in employment contexts are listed in Annex III as high-risk. That single classification changes everything about how your enterprise buyers approach procurement.
High-risk classification means buyers are required — not just encouraged — to conduct due diligence on your system before deploying it. Their procurement and legal teams have checklists. Those checklists ask about bias. And they ask in detail.
If your product touches any of these areas, you are in the high-risk zone:
- Automated resume screening or shortlisting
- AI-assisted candidate ranking or scoring
- Evaluation or assessment of candidates during hiring
- AI that serves job ads to specific demographics
The EU AI Act's Article 10 requires that training data for high-risk systems be "relevant, sufficiently representative, and, to the best extent possible, free of errors." Article 9 requires systematic risk management including testing for bias before the system goes live. Buyers know this. Their questionnaires reflect it.
The 5 Bias Questions You Will Get
Here is what the questionnaire section on bias and non-discrimination actually looks like — and how to answer each one.
Q1: What training data did you use, and what steps did you take to ensure it was representative?
The goal of this question is to understand whether your model was trained on data that reflects historical patterns of discrimination — for example, historical hiring decisions that favored certain demographics.
How to answer: Describe the source of your training data. Be specific: proprietary labelled data, public datasets, customer-provided data, or a combination. Then describe what you did about representativeness — whether you audited demographic balance, whether you excluded certain signals (like zip code or school name as proxies for protected characteristics), and how you handled class imbalance.
Example answer: "Our model was trained on [X million] anonymized hiring outcomes from [Y] enterprise customers, with personally identifiable information removed. We audited training data for demographic balance across gender and ethnicity. We removed variables identified as potential proxies for protected characteristics, including name-based inference and educational institution prestige scores. We retrain the model [frequency] and repeat the representativeness audit with each cycle."
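If you want to show buyers what "audited for demographic balance" means in practice, the check reduces to comparing each group's share of the training data against a baseline. Here is a minimal sketch — the field names, tolerance, and even-split baseline are illustrative assumptions, not a prescribed methodology:

```python
from collections import Counter

def audit_balance(records, field, tolerance=0.2):
    """Report each group's share of the training data and whether it
    stays within `tolerance` of an even split (a hypothetical policy).

    records: list of dicts, e.g. anonymized hiring outcomes
    field:   demographic attribute to audit, e.g. "gender"
    """
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    expected = 1 / len(counts)  # naive baseline: equal representation
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = (share, abs(share - expected) <= tolerance)
    return report

# Toy data; in production this would run over the full training set.
records = [
    {"gender": "f"}, {"gender": "f"}, {"gender": "m"},
    {"gender": "m"}, {"gender": "m"}, {"gender": "x"},
]
print(audit_balance(records, "gender"))
```

A real audit would compare against a labor-market or applicant-pool baseline rather than an even split, and would repeat the check on every retraining cycle, as the example answer above describes.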
Q2: How do you test for discriminatory outputs?
This is asking about your ongoing bias testing methodology, not just a one-time audit.
How to answer: Name the specific testing methods you use. Disparate impact analysis is the most common — measuring whether your model's outputs (scores, recommendations, rankings) differ significantly across demographic groups. Also mention counterfactual testing if you do it: changing only protected attributes and checking if scores change.
Example answer: "We conduct disparate impact analysis on model outputs [frequency], measuring selection rates across gender, ethnicity, and age cohorts using the 4/5ths rule as a baseline. We run counterfactual tests at each model update cycle, holding all non-protected features constant and varying protected attribute proxies. Results are reviewed by our AI governance lead and documented in our bias testing log."
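Mechanically, the 4/5ths-rule check described above is a small ratio computation: each group's selection rate divided by the highest group's rate, flagged if the ratio falls below 0.8. A minimal sketch, with made-up group names and counts:

```python
def selection_rates(outcomes):
    """outcomes: dict mapping group -> (selected, total)."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def disparate_impact(outcomes, threshold=0.8):
    """Apply the 4/5ths rule: a group passes if its selection rate is
    at least `threshold` times the highest group's selection rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: (r / best, r / best >= threshold) for g, r in rates.items()}

# Hypothetical screening outcomes: (candidates advanced, candidates screened)
outcomes = {"group_a": (50, 100), "group_b": (30, 100), "group_c": (45, 100)}
print(disparate_impact(outcomes))
# group_b's ratio is 0.3 / 0.5 = 0.6, below 0.8, so it would be flagged
```

Counterfactual testing is a separate step: re-score the same records with only the protected-attribute proxies changed and check whether outputs move. The point of showing a computation like this in your documentation is that it makes "we test for bias" verifiable rather than aspirational.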
Q3: What happens when you detect bias?
Buyers want to know your response process, not just your testing process. Detecting bias and doing nothing is worse than not testing.
How to answer: Describe your escalation path: who gets notified, what investigation happens, what remediation looks like (retraining, feature removal, threshold adjustment), and how you communicate to customers.
Example answer: "If disparate impact analysis shows a selection ratio below 0.8 for any protected group, we flag the issue in our internal incident tracker, notify our AI governance lead, and pause deployment of the model update pending investigation. Affected customers are notified within [X] business days. We document the remediation steps taken — retraining, feature removal, or threshold recalibration — and provide customers with a remediation summary."
Q4: Do you provide transparency to candidates about AI use?
This is an Article 13 question (transparency obligations) dressed up as a bias question. Buyers want to know if their end users — job candidates — will know AI is involved in evaluating them.
How to answer: Describe your disclosure mechanism. Is it a notice in the application flow? An FAQ? Do you allow candidates to request human review?
Example answer: "Our platform provides a disclosure notice in the candidate-facing application interface stating that AI is used to assist in evaluating applications. The notice explains what factors the AI considers and how scores are used in the hiring process. Candidates can request human review of any AI-assisted recommendation at any stage, at no cost."
Q5: What documentation is available for regulatory audit?
This is the Article 11 technical documentation question. Buyers ask it because they may need to produce your documentation to a regulator — and they cannot produce what you haven't given them.
How to answer: Tell them exactly what documentation exists. A technical documentation package, a data card, a model card, a bias audit report. Offer to share it under NDA.
Example answer: "We maintain EU AI Act Article 11 technical documentation including: system architecture description, training data description with demographic audit results, bias testing methodology and results by protected category, human oversight mechanisms, and version history with changelog. This documentation package is available to enterprise customers under NDA upon request."
Why Buyers Ask These Questions Now
August 2, 2026 is the compliance deadline for high-risk AI systems under the EU AI Act. Penalties under the Act reach €35 million or 7% of global annual turnover, whichever is higher, for the most serious violations, and up to €15 million or 3% for breaches of high-risk system obligations. Enterprise buyers — especially large employers in regulated sectors — know their internal compliance teams will scrutinize any AI tool in the hiring stack. Procurement is getting ahead of that scrutiny.
This means the questionnaire you received today is not the last one. It is the first of many. The buyers who ask carefully now will ask again at every contract renewal.
Stop Answering From Scratch
Most HR tech founders answer these questions by typing something into an email and hoping it sounds credible. The problem is that next month a different enterprise buyer will ask the same question slightly differently, and you'll type something slightly different.
Your answers will be inconsistent. Sophisticated procurement teams compare notes. They notice.
Complizo stores your answers to these questions against the specific product features that back them. When the next buyer asks about bias testing, your answer is already there — word-for-word consistent with what you told the last buyer, linked to the feature that actually does the bias testing.
Try Complizo free — paste your first questionnaire.