Your Lending Customer Just Asked How Your Credit Underwriting AI Avoids Discriminating: How to Answer the Fairness and Bias Section
The questionnaire came from a Dutch cooperative bank. Thirty-two pages, covering your AI-assisted credit underwriting tool. They had been a paying customer for eight months and their annual vendor review was overdue.
Section 7 stopped you:
"Under EU AI Act Article 10, please describe your methodology for ensuring the model does not perpetuate or amplify bias against protected groups. Provide your last bias audit report or equivalent documentation. If no audit has been conducted, describe your planned timeline."
Fintech CTOs who build credit underwriting, loan origination, or creditworthiness scoring tools are inside the EU AI Act's high-risk classification. Annex III, category 5b: AI systems used to evaluate the creditworthiness of natural persons or establish their credit score. The documentation burden that comes with that classification is substantial.
Here is how to answer the sections that arrive first.
What Article 10 Actually Requires
Article 10 sets data governance obligations for high-risk AI systems. In the creditworthiness context, three requirements dominate every procurement questionnaire:
1. Relevant and representative training data. Your training dataset must reflect the population of borrowers your model will be used on. If you trained on US borrowers and are now underwriting Dutch borrowers, that gap matters. Your answer needs to address geographic and demographic representativeness directly.
2. Examination for possible biases. Before training, you must examine the dataset for biases that could produce discriminatory outcomes. After training, you must examine the live outputs. Both steps need to be documented.
3. Appropriate data governance. You need documented processes for how data was collected, labeled, cleaned, and enriched — and who approved each step.
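One way to make the representativeness requirement concrete is a simple distribution comparison between the training population and the deployment population. The sketch below is illustrative only — the function name, the 5% gap threshold, and the `country` attribute are assumptions, not anything Article 10 prescribes:

```python
# Hypothetical sketch: flag representativeness gaps between the training
# population and the deployment population. The max_gap threshold and
# field names are illustrative, not mandated by Article 10.
from collections import Counter

def representativeness_gaps(train_records, deploy_records, key, max_gap=0.05):
    """Return attribute values whose population share differs by more than max_gap."""
    def shares(records):
        counts = Counter(r[key] for r in records)
        total = sum(counts.values())
        return {k: v / total for k, v in counts.items()}

    train, deploy = shares(train_records), shares(deploy_records)
    gaps = {}
    for value in set(train) | set(deploy):
        diff = abs(train.get(value, 0.0) - deploy.get(value, 0.0))
        if diff > max_gap:
            gaps[value] = round(diff, 3)
    return gaps

# Usage: a US-trained model reviewed against a Dutch borrower population
train = [{"country": "US"}] * 90 + [{"country": "NL"}] * 10
deploy = [{"country": "NL"}] * 100
gaps = representativeness_gaps(train, deploy, "country")
print(sorted(gaps.items()))  # both countries show a 0.9 share gap
```

A check like this does not prove representativeness, but it produces the kind of documented, reviewable artifact a procurement questionnaire is asking for.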
The procurement question is usually: "What protected characteristics were examined in your bias audit?"
How to answer: List the protected characteristics covered by the EU's Equal Treatment Directives and relevant national law — gender, racial or ethnic origin, religion or belief, disability, age, sexual orientation. Describe which of these your bias analysis covered, which statistical tests you used (disparate impact ratio, equalized odds, calibration across groups), and what you found. If results exceeded your threshold, describe what you changed.
What your customer actually needs: documentation their regulator — the Dutch Central Bank (DNB) or European Banking Authority (EBA) — can review if they audit the deployer's AI use. Your answer is an input to that audit file.
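The disparate impact ratio mentioned above is the simplest of these tests: the approval rate of the protected group divided by that of the reference group. A minimal sketch, assuming decisions are available as labeled records; the 0.8 cutoff is the common "four-fifths" rule of thumb, not a figure set by the EU AI Act:

```python
# Hypothetical sketch of a disparate impact ratio check.
# A ratio below ~0.8 (the "four-fifths" rule of thumb) warrants investigation.
def disparate_impact_ratio(decisions, group_key, protected, reference):
    """decisions: list of dicts with the group attribute and an 'approved' bool."""
    def approval_rate(group):
        subset = [d for d in decisions if d[group_key] == group]
        return sum(d["approved"] for d in subset) / len(subset)

    return approval_rate(protected) / approval_rate(reference)

# Usage: younger applicants approved at 30%, older applicants at 50%
decisions = (
    [{"age_band": "under_30", "approved": True}] * 30
    + [{"age_band": "under_30", "approved": False}] * 70
    + [{"age_band": "30_plus", "approved": True}] * 50
    + [{"age_band": "30_plus", "approved": False}] * 50
)
ratio = disparate_impact_ratio(decisions, "age_band", "under_30", "30_plus")
print(f"{ratio:.2f}")  # 0.60 — below 0.8, so this result needs explanation
```

Equalized odds and calibration checks follow the same pattern but condition on the true outcome; whichever tests you run, the threshold and the remediation decision are what the auditor will ask about.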
Answering the Model Explainability Section
Section 8 of the same questionnaire will ask about explainability. The question is usually: "How does your system generate a credit decision? Can a loan officer see why an applicant was declined?"
Article 13 requires that high-risk AI systems be transparent enough for the deployer to interpret their output and use it appropriately. For a credit underwriting tool, this means the loan officer must be able to see which features drove the decision — not just the score.
How to answer: Describe your explainability method. Most production credit models use SHAP (SHapley Additive exPlanations) or LIME (Local Interpretable Model-agnostic Explanations) to attribute the decision to input features. Name the method you use, describe what the loan officer sees when they open an application, and confirm that the explanation is stored in the applicant record.
If your system returns a score with no feature attribution visible to the loan officer, your answer to this section will be the weakest part of your submission. Many enterprise banking customers will not proceed with vendors who cannot answer this credibly.
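What the loan officer sees can be as simple as the top attributions ranked by magnitude. The sketch below assumes per-feature contributions (e.g., SHAP values) are already computed by the scoring pipeline; the feature names and rendering are illustrative:

```python
# Hypothetical sketch of the loan-officer view: given per-feature
# attributions (e.g., SHAP values from the scoring pipeline), render the
# top drivers of a decision. Negative values push toward decline here.
def explain_decision(attributions, top_n=3):
    """attributions: {feature: contribution}; returns human-readable lines."""
    ranked = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = []
    for feature, value in ranked[:top_n]:
        direction = "toward approval" if value > 0 else "toward decline"
        lines.append(f"{feature}: {value:+.2f} ({direction})")
    return lines

# Usage: a declined application with illustrative attribution values
shap_values = {
    "debt_to_income": -0.42,
    "months_since_delinquency": -0.18,
    "income_stability": 0.11,
    "loan_amount": -0.05,
}
for line in explain_decision(shap_values):
    print(line)
# debt_to_income: -0.42 (toward decline)
# months_since_delinquency: -0.18 (toward decline)
# income_stability: +0.11 (toward approval)
```

Storing exactly these rendered lines (or the raw attributions) in the applicant record is what lets you answer "yes, and here is the retained explanation" rather than "yes, in principle."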
The Human Oversight Question That Follows
After fairness and explainability, Article 14 arrives: can the loan officer override the AI's recommendation? What happens to the audit trail when they do?
The EU AI Act requires that high-risk AI systems allow qualified humans to override outputs. Your answer should confirm:
- Overrides are possible at the individual application level
- The override reason is stored alongside the AI recommendation
- A qualified, designated person — not any user — is the authorized override authority
- The override record is retained for the period required by your customer's regulatory obligations
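The four points above map naturally onto a single audit record written at override time. A minimal sketch, assuming a role-based list of authorized reviewers; all field names are illustrative, and retention would be handled by whatever store holds the record:

```python
# Hypothetical sketch of an override audit record capturing the points
# above: per-application override, mandatory reason, designated authority,
# and a timestamped record for retention. Field names are illustrative.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class OverrideRecord:
    application_id: str
    ai_recommendation: str      # e.g. "decline"
    human_decision: str         # e.g. "approve"
    reason: str                 # free-text justification, required
    reviewer_id: str            # must hold the designated override role
    recorded_at: str            # UTC timestamp for the retention period

def record_override(application_id, ai_recommendation, human_decision,
                    reason, reviewer_id, authorized_reviewers):
    if reviewer_id not in authorized_reviewers:
        raise PermissionError(f"{reviewer_id} is not a designated override authority")
    if not reason.strip():
        raise ValueError("an override reason is mandatory")
    return OverrideRecord(
        application_id, ai_recommendation, human_decision, reason,
        reviewer_id, datetime.now(timezone.utc).isoformat(),
    )

# Usage: a designated officer overrides a decline with a stated reason
rec = record_override("APP-2041", "decline", "approve",
                      "Verified income source not visible to the model",
                      "officer-17", authorized_reviewers={"officer-17"})
print(rec.human_decision)  # approve
```

Rejecting unauthorized reviewers and empty reasons at write time is what makes the audit trail credible: the record cannot exist without the justification and the designated authority attached.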
The Volume Problem
Thirty-two pages from one bank. Next month it will be a German Sparkasse with a different format and twenty-six pages. The month after, a Nordic fintech partner with eighteen pages and a tight procurement deadline.
The questions are structurally the same across institutions. The wording is always slightly different. Answering each one from scratch is the wrong approach.
Try Complizo free at complizo.com