Your LegalTech Product Just Got an AI Act Questionnaire from a BigLaw Buyer — Here's How to Answer It
Two weeks ago a legaltech founder forwarded me the procurement packet her company had just received from a top-20 European law firm. Buried on page nine, in a section labeled "AI System Conformity," was a block of twenty-one questions. Every question mapped to an article of the EU AI Act. The firm wanted answers before it would renew her contract.
She had not expected this. Her product is an AI-assisted contract review tool — not a court-facing system. She did not think the AI Act applied to her. Her buyer's general counsel disagreed. With the August 2, 2026 enforcement date approaching, that disagreement is happening in every procurement cycle right now.
If you build legaltech that touches contract analysis, e-discovery, legal research, matter intake, document review, or any AI output that shapes a legal opinion, here is what European law firms and in-house legal teams are about to ask you — and how to answer.
The Category 8 Confusion That Every LegalTech Founder Hits First
Annex III, Category 8(a) of the AI Act lists as high-risk:
"AI systems intended to be used by a judicial authority or on their behalf to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts."
Read strictly, that description covers AI used by judges and their clerks — not AI sold to private law firms or in-house counsel. Many legaltech founders read it and conclude they are out of scope.
Their buyers read it differently.
Law firms and in-house legal teams are now routinely classifying their AI vendors as high-risk, even when the vendor sits outside Category 8(a), because:
- Contract review AI can shape a legal position that ends up in front of a judicial authority.
- E-discovery AI determines which evidence is produced, and cautious buyers read Category 8 as reaching evidentiary selection once a tribunal relies on the result.
- Legal research AI that surfaces binding precedent can be treated as a Category 8 input if the firm's work product depends on it and a court later relies on that work product.
- Firm-side AI that feeds into lawyer-assisted decisions that materially affect a party's legal rights is increasingly being flagged as adjacent-to-high-risk by cautious procurement teams.
The net effect is the same: your buyer asks you for the paperwork of a high-risk system, even if your own classification says you are limited-risk. Arguing classification with a buyer rarely wins the deal. Having the answers ready does.
The Seven Questions Your Law-Firm Buyer Will Actually Ask
I reviewed fifteen legaltech vendor questionnaires sent by European firms in the last ninety days; seven questions appear in almost every one. They map directly to Articles 9 through 15 and Article 50 of the AI Act.
1. "What is the intended purpose of your AI system, and is it confined to assistive rather than autonomous legal output?"
Article 13 requires you to declare your intended purpose precisely. Law firms want to see the words "decision support" rather than "decision making." If your marketing copy says the tool "drafts" or "decides," your legal answer needs to clarify the human-in-the-loop boundary that Article 14 requires.
2. "Have you classified your system under Annex III, and if not, why not?"
This is the Category 8 question, and the worst answer is silence. If you believe you are limited-risk or minimal-risk, write a one-paragraph rationale. If you sell to judicial authorities, admit Category 8(a). If your buyer plans to use your output in court filings, acknowledge that the deployer may have to treat it as high-risk on their side under Article 26.
3. "Can we see your Article 11 technical documentation?"
Annex IV specifies the contents: architecture, training methodology, data provenance, performance metrics, limitations, and known failure modes. Law firms will read this. They have partners who read patents and contracts for a living. Generic model cards will not pass review. Firm-specific questions typically include "how does your model behave on Dutch labor law versus German labor law," so be prepared to document performance segmentation.
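Performance segmentation is easier to defend if it lives in structured data rather than prose. Here is a minimal, hypothetical sketch of what a per-jurisdiction metrics record could look like; the field names and numbers are invented for illustration, not drawn from any real evaluation or prescribed by Annex IV:

```python
from dataclasses import dataclass, asdict
import json


@dataclass
class SegmentMetrics:
    """Evaluation results for one jurisdiction/practice-area segment.

    Illustrative fields only -- map them to whatever your Article 11
    technical documentation actually reports.
    """
    jurisdiction: str          # e.g. "NL" or "DE"
    practice_area: str         # e.g. "labor"
    test_set_size: int
    clause_extraction_f1: float
    citation_accuracy: float
    known_failure_modes: list[str]


segments = [
    SegmentMetrics("NL", "labor", 412, 0.91, 0.97,
                   ["collective agreement references"]),
    SegmentMetrics("DE", "labor", 388, 0.88, 0.95,
                   ["works council consultation clauses"]),
]

# Emit a machine-readable annex that can sit alongside the Article 11 file.
print(json.dumps([asdict(s) for s in segments], indent=2))
```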
4. "What are the data governance controls for the documents we upload?"
Article 10 is a headline article for legaltech. Your buyer's documents contain privileged, confidential, and often personal data. They want to see where the documents are stored, who can train on them, how deletion works, and whether outputs leak across tenants. GDPR Article 22 already restricts automated decisions that produce legal effects, so your buyer is reading this answer through two regimes at once.
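One way to make that answer concrete is to express the controls as a reviewable configuration rather than prose. The sketch below is hypothetical: the field names, region strings, and rejection rules are assumptions chosen for illustration, not a standard schema:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class TenantDataPolicy:
    """Hypothetical per-tenant data-governance settings a buyer could audit."""
    tenant_id: str
    storage_region: str            # where uploaded documents live
    used_for_training: bool        # whether uploads ever reach a training pipeline
    retention_days: int            # hard-delete window after the matter closes
    cross_tenant_retrieval: bool   # can outputs draw on other tenants' documents?


def review_for_privileged_work(policy: TenantDataPolicy) -> list[str]:
    """Return the settings a law-firm buyer would most likely reject."""
    issues = []
    if policy.used_for_training:
        issues.append("client documents must not enter the training pipeline")
    if policy.cross_tenant_retrieval:
        issues.append("retrieval must be isolated to the firm's own tenant")
    if policy.storage_region not in {"eu-west-1", "eu-central-1"}:
        issues.append("storage outside the EU needs a transfer mechanism on file")
    return issues


print(review_for_privileged_work(
    TenantDataPolicy("firm-042", "eu-central-1",
                     used_for_training=False,
                     retention_days=90,
                     cross_tenant_retrieval=False)))
```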
5. "How do lawyers override or reject your AI's output, and is that override logged?"
Article 14 human oversight questions in legaltech are the most specific of any vertical. Firms do not want a disclaimer. They want override workflows, review queues, reason codes when a lawyer disagrees with the AI, and an audit trail their partners can defend to a regulator or to a client in malpractice discovery.
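In practice that means every AI suggestion carries a reviewer decision, a reason code when the lawyer disagrees, and a timestamp that can be exported later. A minimal sketch of what such an audit record might contain, with invented names rather than a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class ReviewDecision(Enum):
    ACCEPTED = "accepted"
    EDITED = "edited"
    REJECTED = "rejected"


@dataclass
class OverrideRecord:
    """One human-oversight event: a lawyer's decision on an AI suggestion."""
    matter_id: str
    suggestion_id: str
    reviewer: str                      # the lawyer accountable for the output
    decision: ReviewDecision
    reason_code: str | None = None     # required when the AI is overridden
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))


def log_override(record: OverrideRecord) -> OverrideRecord:
    """Enforce the rule firms actually ask about: no silent overrides."""
    if record.decision is not ReviewDecision.ACCEPTED and not record.reason_code:
        raise ValueError("an edited or rejected suggestion needs a reason code")
    # A real system would append this to an immutable audit store.
    return record


log_override(OverrideRecord(
    matter_id="M-2031", suggestion_id="s-77",
    reviewer="associate.vdberg", decision=ReviewDecision.REJECTED,
    reason_code="clause misclassified as indemnity"))
```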
6. "How do you handle hallucinations — specifically, fabricated citations?"
This question will appear on every legaltech questionnaire from now through the next decade. US sanctions against lawyers who filed briefs with fabricated AI-generated case citations have made European firms paranoid. Article 15 covers accuracy and robustness. Your answer needs to include citation verification, source linking, and a measurable hallucination rate on a public benchmark — not a marketing claim.
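What citation verification can mean at the code level is a post-generation check that every cited case resolves against an authoritative source before the output reaches the lawyer. The sketch below is illustrative only: the regex is rough, the lookup is a placeholder for whatever citator or court database you actually integrate, and the example ECLI strings are made up:

```python
import re


def extract_citations(text: str) -> list[str]:
    """Very rough ECLI extractor; a production system would use a proper parser."""
    return re.findall(r"ECLI:[A-Z]{2}:[A-Z0-9]+:\d{4}:[A-Z0-9.]+", text)


def resolve_citation(ecli: str, known_cases: set[str]) -> bool:
    """Placeholder lookup -- swap in your citator or court-database API."""
    return ecli in known_cases


def verify_output(text: str, known_cases: set[str]) -> dict:
    cited = extract_citations(text)
    unresolved = [c for c in cited if not resolve_citation(c, known_cases)]
    return {
        "citations_found": len(cited),
        "unresolved": unresolved,
        # The number a questionnaire can actually check:
        "fabrication_rate": len(unresolved) / len(cited) if cited else 0.0,
    }


known = {"ECLI:NL:HR:2019:1734"}                      # illustrative entry
draft = ("The employer's duty of care follows ECLI:NL:HR:2019:1734 and "
         "ECLI:NL:HR:2021:9999.")                     # second citation is fabricated
print(verify_output(draft, known))
```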
7. "Do you disclose to end users that they are interacting with an AI system?"
Article 50 transparency applies. For legaltech, this usually means both the lawyer using the tool and, where applicable, the client or counterparty whose document is being analyzed. Firms want to see the exact disclosure language and where it appears in the product.
The Privilege and Confidentiality Layer Most Founders Miss
Legaltech sits on top of two regimes that fintechs and HR tech vendors do not face as sharply. Attorney-client privilege and professional-secrecy rules in many EU jurisdictions mean that a law firm cannot send client documents to a vendor who could, under any circumstance, become a data controller with independent access.
Your buyer's AI Act questionnaire will almost always be accompanied by a privilege and confidentiality annex. The two must be answered as one system. If your AI Act answer says "our training pipeline uses customer data to improve the model" and your confidentiality answer says "we never use client data for training," you will fail the review — not because either answer is wrong, but because they contradict. Procurement teams are now reading these documents side by side.
Why Answer Consistency Is Harder in LegalTech Than Anywhere Else
Law firms talk to each other. The in-house privacy counsel at a Magic Circle firm sits on the same ALM panels as the general counsel at a top-ten Continental firm. When their questionnaires go back to the same vendor, they compare notes.
Most legaltech companies I work with have seven to twelve open procurement questionnaires at any time. Each is being answered by a different account executive or solutions engineer. Your first answer and your seventh answer are rarely written by the same person, and rarely with the same precision. Two months later, the same AI Act fact appears three different ways in three different firms' files.
That is the failure mode that turns a closed deal into a renewal loss. Not regulation. Inconsistency.
What to Do Before Your Next Law-Firm Questionnaire
You have less than four months until enforcement. Here is the minimum legaltech-specific preparation:
Step 1: Write your Annex III classification rationale in a single paragraph. If you are limited-risk, explain why. If your buyers are likely to treat you as high-risk anyway, write the answer as if you were.
Step 2: Build the seven answers above as your canonical response. Anchor each one to a specific article of the AI Act. Include the Article 10 data-governance answer as a standalone block, because it will be lifted verbatim into confidentiality annexes.
Step 3: Decide your hallucination policy in writing. A verifiable citation rate, a public benchmark, and a rollback process. Law firms respect this more than any marketing language.
Step 4: Route every future questionnaire through one verified answer set. The same answer goes to every firm, every time — until the underlying product or control changes.
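One lightweight way to enforce a single verified answer set is to key every canonical answer to the control it addresses and pull from that one source when a new questionnaire arrives. A toy sketch, with invented keys, answer text, and routing logic; a real system would classify incoming questions rather than keyword-match them:

```python
# Canonical answers keyed by the control they address. One source of truth;
# account executives pull from it instead of rewriting answers per deal.
CANONICAL_ANSWERS = {
    "ai_act.article_10.data_governance": (
        "Customer documents are stored in the EU, are never used for model "
        "training, and are hard-deleted 90 days after contract termination."
    ),
    "ai_act.article_14.human_oversight": (
        "Every AI suggestion requires lawyer acceptance; overrides are "
        "logged with a reason code and an exportable audit trail."
    ),
}


def answer_question(question: str) -> str:
    """Naive keyword router -- illustrative only."""
    q = question.lower()
    if "oversight" in q or "override" in q:
        return CANONICAL_ANSWERS["ai_act.article_14.human_oversight"]
    if "data" in q or "training" in q or "deletion" in q:
        return CANONICAL_ANSWERS["ai_act.article_10.data_governance"]
    raise LookupError("no verified answer on file -- escalate, do not improvise")


print(answer_question("How do lawyers override the AI's output?"))
```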
Complizo does this for legaltech teams specifically. Paste your customer's questionnaire — whether it is a twenty-question short form from a boutique firm or a two-hundred-question BigLaw vendor assessment — and every AI Act, privilege, and confidentiality question maps to the same verified answer set you built once.
Try Complizo free at complizo.com
Legaltech buyers are the most detail-oriented buyers in the EU SaaS market. They read your answers the way a partner reads a contract. The legaltech companies that will keep closing deals through August 2026 are the ones whose AI Act answers are ready, specific, and identical every single time.