
A General Counsel Just Asked How Your AI Contract Drafting Tool Was Built: Answering the Article 13 Transparency Questions for LegalTech Founders


The email came from the legal operations manager at a 200-person professional services firm in Belgium. The firm had been trialing your AI contract drafting tool for six weeks. Before signing the annual subscription, she sent a questionnaire.

Section 4 was the problem:

"Under EU AI Act Article 13, please describe how your system generates contract language. Specifically: what training data was used, how does the system distinguish between jurisdictions, and how can our legal team identify which clauses were AI-generated versus drawn from our own template library?"

LegalTech CTOs need to decide quickly whether their contract drafting tool is classified as high-risk under the EU AI Act. The short answer: it depends on what the tool does. AI systems that assist lawyers in drafting routine commercial contracts are not automatically high-risk. AI systems used in the administration of justice — where the AI substantially influences a legal determination — sit in Annex III, point 8.

Most contract drafting tools fall outside Annex III. But procurement teams will still send you Article 13 questions, because their general counsel told them to. The question lands regardless of your classification.

Here is how to answer it.

What Your Customer Is Actually Asking Under Article 13

Article 13 sets transparency requirements for high-risk AI. Even when your tool is not classified high-risk, enterprise legal buyers use Article 13 as their due diligence framework. The three things they want to know:

1. What training data was used to build the model?

Be specific about what you can disclose. If you fine-tuned a foundation model on a corpus of commercial contracts, say so. Name the source categories: licensed contract databases, publicly available transaction documents, internally annotated templates. You do not need to disclose your full training pipeline. You need to give a legal operations manager enough to assess whether the training data is relevant to the documents your tool will produce.

The question European procurement teams ask most consistently: "Was EU-jurisdiction contract language included in training, or only US contract language?" If you cannot answer this, your tool will lose deals to competitors who can.
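One practical way to prepare that answer is to maintain it as structured data rather than prose, so the jurisdiction question can be answered mechanically. The sketch below is illustrative only: every category, value, and field name is hypothetical, not any vendor's actual disclosure.

```python
# Illustrative training-data disclosure, kept as structured data so the
# "was EU contract language included?" question has a checkable answer.
# All categories, values, and field names here are hypothetical.

TRAINING_DATA_DISCLOSURE = {
    "base_model": "third-party foundation model (see provider's model card)",
    "fine_tuning_corpus": [
        {"category": "licensed contract databases",
         "jurisdictions": ["EU", "UK", "US"]},
        {"category": "publicly available transaction documents",
         "jurisdictions": ["US"]},
        {"category": "internally annotated templates",
         "jurisdictions": ["EU"]},
    ],
    # You disclose source categories, not the full pipeline.
    "full_pipeline_disclosed": False,
}

def covers_jurisdiction(disclosure: dict, jurisdiction: str) -> bool:
    """Answer the procurement question: was this jurisdiction in training?"""
    return any(
        jurisdiction in source["jurisdictions"]
        for source in disclosure["fine_tuning_corpus"]
    )
```

Keeping the disclosure in this shape means the answer to the questionnaire is generated from the same record your engineering team maintains, rather than drafted from memory each time.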

2. How does the system distinguish between jurisdictions?

Most AI contract tools are trained on a mixture of US and UK English-language commercial contracts. When a Belgian firm uses the tool to draft a Belgian law services agreement, does the system understand it is working in a civil law jurisdiction? Does it understand that penalty clauses function differently under Belgian law than English law?

Your answer should describe the mechanism honestly. Options include: jurisdiction-specific prompt routing, jurisdiction flags in the system configuration, fine-tuned models per jurisdiction, or explicit guidance that the tool generates first drafts requiring local counsel review before signature. Describe what you have built. Do not describe what you plan to build.
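To make the first option concrete: jurisdiction-specific prompt routing can be as simple as a lookup table with a conservative fallback. The sketch below is illustrative only; the jurisdiction codes and guidance strings are hypothetical and are not legal advice.

```python
# Illustrative only: jurisdiction-specific prompt routing as a lookup
# table with a conservative fallback. Codes and guidance strings are
# hypothetical and are not legal advice.

JURISDICTION_GUIDANCE = {
    "BE": (
        "Draft under Belgian law, a civil law jurisdiction. Do not "
        "import common-law drafting conventions, such as the English "
        "penalty rule, without adaptation."
    ),
    "EN": (
        "Draft under the law of England and Wales, a common law "
        "jurisdiction. Apply common-law conventions for liquidated "
        "damages and penalty clauses."
    ),
}

# Fallback when no jurisdiction is configured: first draft only,
# flagged for local counsel review before signature.
DEFAULT_GUIDANCE = (
    "Jurisdiction not configured. Generate a first draft only and "
    "flag the output for local counsel review before signature."
)

def build_system_prompt(jurisdiction_code: str) -> str:
    """Return the drafting guidance routed for this jurisdiction."""
    return JURISDICTION_GUIDANCE.get(jurisdiction_code, DEFAULT_GUIDANCE)
```

The point of the fallback is the honesty requirement above: when the system does not know the jurisdiction, it says so and defers to local counsel rather than guessing.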

3. Can the legal team identify AI-generated clauses?

This is the Article 13 transparency question in its most concrete form. The procurement team wants to know whether a lawyer reviewing the output document can distinguish which clauses came from the AI and which came from the client's own precedent library.

If your tool marks AI-generated content — with a highlight, a comment flag, a metadata tag, a side-by-side view — describe that mechanism. If it does not, this will be the weakest part of your submission. Legal teams buying AI drafting tools in Europe in 2026 treat clause-level attribution as a minimum acceptable feature, not a premium one.

The Training Data Liability Question

Section 5 of the same questionnaire will ask about intellectual property and training data. The usual form: "Does your training data include copyrighted contract templates? If so, how do you manage IP risk for outputs your tool generates?"

This is not an EU AI Act question. It is a contractual risk question. Answer it by describing your data licensing position: whether training data was licensed or public domain, what warranties you provide about output originality, and whether your service terms include any indemnification position for IP claims arising from AI-generated contract language.

If you do not have a clear answer to this, your legal counterpart at the prospect firm will raise it in contract negotiations. It is better to prepare the answer before the negotiation starts.

The Answer That Wins the Deal

General counsel offices evaluating AI contract tools are not looking for perfection. They are looking for vendors who understand what questions matter and can answer them without making the buyer's legal team reconstruct what the vendor actually built.

The deal-winning answer to Section 4 is not the most technically complete one. It is the most direct one — it acknowledges the tool's real limitations without hedging everything into uselessness, and it gives the GC something defensible to present if their own leadership asks how they vetted the vendor.

Try Complizo free at complizo.com
