A BigLaw Firm Just Asked If Your AI Document Review Tool Is Subject to EU AI Act High-Risk Rules: Answering the eDiscovery Procurement Questionnaire
Your sales team just closed an intro call with a Magic Circle firm's IT procurement lead. The follow-up email arrived this morning: a 38-question vendor security and AI governance assessment. Question 11 reads:
"Under the EU AI Act, is your AI system classified as high-risk? If so, provide documentation of your conformity assessment."
Your eDiscovery AI does document relevance scoring, privilege prediction, and issue coding. Is that high-risk? How do you answer?
This post gives you the procurement-ready answer — and the three questions that trip up legaltech CTOs most often in BigLaw due diligence.
Is eDiscovery AI "High-Risk" Under the EU AI Act?
The short answer: it depends on how it's used, and you should say so explicitly.
EU AI Act Annex III lists eight categories of high-risk systems. eDiscovery AI doesn't map cleanly to any of them — unlike hiring tools (point 4) or credit scoring (point 5b), there's no explicit "legal document review" entry.
The argument that it's not high-risk:
- The AI assists lawyers; it does not make decisions about individuals
- Outputs are reviewed by attorneys with professional privilege and duty of care
- No individual's rights, employment, or access to services is directly affected by a relevance score
The argument that it could be considered high-risk:
- If used in criminal proceedings, it potentially maps to Annex III point 8 (administration of justice)
- If used to identify documents affecting employment disputes, it may intersect with point 4
- Article 7 empowers the Commission to amend Annex III via delegated acts, adding new high-risk use cases over time
The right answer for procurement: State your classification position, explain your reasoning, and describe your safeguards regardless of classification. Law firms want to know you've thought about it — not that you've memorized the regulation.
The 3 Questions That Stall LegalTech Deals
1. "Can your AI output be used directly to support a legal conclusion without attorney review?"
What they're asking: Is your AI making legal judgments, or assisting attorneys making legal judgments?
How to answer: Be clear about the product boundary. A well-designed eDiscovery tool surfaces relevance scores and suggested issue codes — it does not conclude "this document is privileged." The attorney confirms. That distinction matters enormously for EU AI Act classification and for professional responsibility rules in most jurisdictions.
If your product does generate privileged/not-privileged recommendations: explain the confidence threshold below which the attorney must independently review, explain how the attorney's override is recorded, and explain that your output is an input to professional judgment, not a replacement for it.
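To make the "input to professional judgment" point concrete, here is a minimal sketch of what that boundary can look like in code. Everything in it (the `PrivilegeRecommendation` and `ReviewDecision` types, the threshold value, the audit record shape) is a hypothetical illustration, not a description of any specific product:

```python
# Hypothetical sketch: the model recommends, the attorney decides, the override is preserved.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

REVIEW_THRESHOLD = 0.85  # below this confidence, independent attorney review is mandatory

@dataclass
class PrivilegeRecommendation:
    document_id: str
    model_label: str          # e.g. "likely_privileged"
    confidence: float         # model confidence, 0.0 to 1.0
    model_version: str

@dataclass
class ReviewDecision:
    recommendation: PrivilegeRecommendation
    attorney_id: str
    final_label: str          # the attorney's call, which always wins
    reviewed_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    @property
    def is_override(self) -> bool:
        return self.final_label != self.recommendation.model_label

def requires_independent_review(rec: PrivilegeRecommendation) -> bool:
    """Low-confidence recommendations cannot be accepted without attorney review."""
    return rec.confidence < REVIEW_THRESHOLD

def record_decision(decision: ReviewDecision, audit_log: list[dict]) -> None:
    """Persist the attorney's decision alongside the model output it confirms or overrides."""
    audit_log.append({**asdict(decision), "override": decision.is_override})

# Example: a low-confidence recommendation is reviewed and overridden by the attorney.
audit_log: list[dict] = []
rec = PrivilegeRecommendation("DOC-0042", "likely_privileged", 0.62, "priv-model-1.3")
assert requires_independent_review(rec)
record_decision(ReviewDecision(rec, attorney_id="A-17", final_label="not_privileged"), audit_log)
print(json.dumps(audit_log[-1], indent=2))
```

The detail procurement cares about is visible in the record: the attorney's label is final, and the model output it replaced stays in the audit trail.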
2. "How does your AI handle attorney-client privilege predictions?"
What they're asking: Privilege waiver is catastrophic in litigation. What happens if your AI incorrectly marks a privileged document as non-privileged?
How to answer: This is a product risk question dressed as a compliance question. Answer it as a product question:
- Your privilege model has a configurable sensitivity threshold — firms can tune it toward over-inclusion (lower waiver risk, more manual review)
- Privilege predictions are surfaced with confidence scores, not binary flags
- Your system logs all privilege-predicted documents with their scoring inputs for audit trail purposes
- You offer a "privilege review buffer" workflow where low-confidence predictions route to senior attorney review
If your product doesn't have these features, this conversation tells you what to build next.
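If you do have them, be ready to show how they fit together. Here is a minimal sketch of the routing logic described in the list above, with purely illustrative names and thresholds:

```python
# Hypothetical routing sketch: a firm-configurable sensitivity threshold, confidence
# scores rather than binary flags, and a review buffer for low-confidence predictions.
from dataclasses import dataclass

@dataclass
class FirmPrivilegeConfig:
    # Lowering this pushes more documents into privilege review:
    # lower waiver risk, more manual review. Firms tune it per matter.
    privilege_sensitivity: float = 0.30
    # Scores within this margin of the threshold are treated as low-confidence
    # and route to a senior attorney queue instead of the standard queue.
    review_buffer_width: float = 0.15

def route(privilege_score: float, cfg: FirmPrivilegeConfig) -> str:
    """Return the review queue for one document's privilege score (never a final call)."""
    if privilege_score < cfg.privilege_sensitivity:
        return "standard_relevance_review"
    if privilege_score < cfg.privilege_sensitivity + cfg.review_buffer_width:
        return "senior_attorney_privilege_buffer"   # low-confidence: escalate
    return "privilege_review"                        # high score: still attorney-confirmed

cfg = FirmPrivilegeConfig()
for score in (0.12, 0.38, 0.91):
    print(f"score={score:.2f} -> {route(score, cfg)}")
```

The design choice worth calling out in procurement: the sensitivity and buffer settings belong to the firm, not the vendor.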
3. "What training data was used to build the privilege and relevance models, and was it licensed?"
What they're asking: Did you train on confidential client documents? Do we have a data contamination risk?
How to answer: This is where your data governance story matters. Under EU AI Act Article 10, training, validation and testing data for high-risk AI systems must be "relevant, sufficiently representative, and to the best extent possible, free of errors and complete" in view of the intended purpose.
For eDiscovery AI: state what training corpus you used (licensed case law, synthetic documents, commercially licensed datasets, not client matter files), confirm whether customer documents are ever used to retrain shared models (a major concern), and spell out your per-customer data isolation guarantees.
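One way to make this answer crisp is to treat it as configuration rather than prose. A hypothetical sketch, with invented field names, of a per-tenant data policy that the procurement answer can be generated from:

```python
# Hypothetical sketch: "do you train on my data?" answered from configuration, not memory.
from dataclasses import dataclass

@dataclass(frozen=True)
class TenantDataPolicy:
    tenant_id: str
    # Customer matter files never feed the shared (cross-tenant) models.
    customer_docs_in_shared_training: bool = False
    # Optional per-tenant fine-tuning happens only inside the tenant's own environment.
    tenant_scoped_fine_tuning: bool = True
    # Provenance of the shared models' training corpus, stated up front.
    shared_corpus_sources: tuple[str, ...] = ("licensed case law", "synthetic documents")

def procurement_answer(policy: TenantDataPolicy) -> str:
    """Render the tenant's data policy as a plain-language procurement response."""
    shared = "No" if not policy.customer_docs_in_shared_training else "Yes"
    return (
        f"Shared models trained on customer documents: {shared}. "
        f"Shared corpus sources: {', '.join(policy.shared_corpus_sources)}. "
        f"Per-tenant fine-tuning isolated to the tenant environment: "
        f"{'Yes' if policy.tenant_scoped_fine_tuning else 'No'}."
    )

print(procurement_answer(TenantDataPolicy(tenant_id="magic-circle-001")))
```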
The "do you train on my data?" question is the single fastest dealbreaker in legaltech procurement. Have a crisp, accurate answer ready.
The Procurement Pattern in BigLaw
BigLaw IT procurement moves slowly for a reason: the liability for a wrong AI decision in litigation is enormous. Your buyer's job is to surface risks before the partner group approves the tool.
The CTOs who close these deals are the ones who answer questions before they're asked. Your technical documentation — training data provenance, privilege model architecture, human review workflow — should be pre-written, not assembled in response to a questionnaire.
When question 11 arrives asking about conformity assessments, you want to send a two-paragraph answer within 24 hours, not schedule a three-week internal review.
Build Once, Answer Many
BigLaw firms share vendor lists and questionnaire frameworks. If you answer one firm's questionnaire well, you'll face a near-identical questionnaire from the next firm on your pipeline — with slightly different phrasing and a tighter deadline.
Complizo takes your technical documentation and generates accurate, consistent answers to procurement questions like these. No two questionnaires are identical, but your underlying answers should be.
Try Complizo free at complizo.com