A European Bank Just Asked Whether Your AI Document Intelligence Tool Is Built on a Foundation Model and What the EU AI Act Requires: Answering the GPAI Transparency Questions
The questionnaire came from a European investment bank's vendor risk team. It's four pages, most of it familiar territory — your risk management process, your data governance, your human oversight controls. But on page three there's a section you haven't seen before:
"Does your AI system incorporate or rely on a General-Purpose AI (GPAI) model as a base or component? If so, please identify the GPAI model provider and describe what EU AI Act Chapter V obligations apply to your organization and theirs."
If your document intelligence platform, AI analyst copilot, or financial data extraction tool runs on top of a foundation model — GPT-4, Claude, Gemini, Mistral, or similar — this question is directed squarely at your architecture. And the honest answer is more nuanced than a yes or no.
Here's how to answer every GPAI-related question in a financial services vendor questionnaire.
What Chapter V of the EU AI Act Actually Says About GPAI
Chapter V of the EU AI Act (Articles 51–56) governs General-Purpose AI models — large AI models trained on vast data that can perform a wide range of tasks. The regulation distinguishes between:
GPAI model providers: Organizations that train and release a GPAI model (e.g. OpenAI, Anthropic, Google DeepMind, Mistral). These have the heaviest obligations: technical documentation, transparency to downstream users, copyright policy compliance, and — for models with systemic risk — third-party evaluations and adversarial testing.
Downstream providers: Organizations (like most B2B SaaS companies) that build applications on top of a GPAI model. These companies are not GPAI model providers themselves. However, they have pass-through disclosure obligations — they must inform their deployers (your banking customers) about which GPAI models are integrated and what the GPAI provider's terms, capabilities, and limitations are.
Deployers: The bank asking you this question. They need to understand what's inside the AI tool they're deploying to satisfy their own risk management obligations.
The Questions Buyers Ask — and How to Answer Them
"Does your system incorporate a GPAI model?"
This is the threshold question. Answer honestly and completely. If you use OpenAI's API, the Anthropic API, a Google Cloud AI service, or a hosted Mistral model, the answer is yes.
Example answer: "Yes. Our document intelligence platform uses [model name] provided by [provider name] as its core language model. The model is accessed via API and processes document content submitted by end users. We do not train or modify the underlying model — we operate as a downstream provider building on a GPAI model."
If you use a fine-tuned version of an open-weight model (e.g., Llama or Mistral) that you've adapted on your own infrastructure, the situation is more nuanced — you may be closer to a GPAI provider than a downstream provider. Be accurate.
"What Chapter V obligations apply to your organization as a downstream provider?"
As a downstream provider (not the GPAI model provider), your primary obligation under Chapter V is transparency toward the next entity in the value chain — your deployer (the bank). Specifically, you are expected to:
- Identify which GPAI model(s) your system uses and who provides them
- Pass on relevant information from the GPAI model provider's documentation to your deployers
- Not misrepresent the capabilities or limitations of the underlying model
You are not required to replicate all the GPAI provider's obligations — those sit upstream. But you need to be a transparent conduit.
"What are the GPAI provider's obligations, and are they meeting them?"
The GPAI model provider (OpenAI, Anthropic, Google, Mistral, etc.) must:
- Maintain and publish technical documentation for their model
- Comply with EU copyright law in training data collection
- Publish and maintain a policy on copyright compliance
- Provide downstream providers with sufficient information about the model's capabilities and limitations to allow those providers to meet their own obligations
For GPAI models with systemic risk (presumed where training compute exceeds 10^25 FLOPs, a threshold the largest frontier models are generally understood to meet), additional obligations apply: adversarial testing, incident reporting to the EU AI Office, and cybersecurity measures.
A well-prepared answer: "Our GPAI model provider is [name]. They publish technical documentation and a model card that we have reviewed. Their compliance obligations under Chapter V are their own, but we have confirmed they operate a transparency and documentation program consistent with the EU AI Act's GPAI requirements. We can share their publicly available documentation and our API terms, which address intended use and known limitations."
"Can you identify specific limitations of the GPAI model relevant to financial document analysis?"
This is where many vendor responses fall short. Buyers want you to be specific about:
- Hallucination risk: Foundation models can generate plausible-sounding but incorrect text. For financial document intelligence, this means extracted figures or summarized clauses should always be verified by a human reviewer before any operational use. Your system documentation should say this explicitly.
- Context window limitations: Large documents may require chunking, which can cause the model to miss cross-document references. Document this and specify how your system handles it.
- Language and jurisdiction specificity: GPAI models are predominantly trained on English text and may have lower accuracy on non-English financial documents or jurisdiction-specific terminology. If your bank customer processes documents in German, Dutch, or Polish, this matters.
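The chunking limitation above is worth making concrete for buyers. A minimal sketch of overlapping chunking (function name and parameter defaults are our own illustration, not any provider's API) shows both the mitigation and why cross-document references can still be missed at chunk boundaries:

```python
def chunk_document(text: str, chunk_size: int = 2000, overlap: int = 200) -> list[str]:
    """Split a long document into overlapping chunks so each fits within the
    model's context window. The overlap reduces, but does not eliminate, the
    risk that a clause referencing material in another chunk is misread."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks
```

Documenting the chunk size, overlap, and how results are reconciled across chunks is exactly the kind of specificity that satisfies this question.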
"Does your GPAI model provider have a EU AI Act compliance program in place?"
The major commercial GPAI providers have each published EU AI Act compliance documentation. As of May 2026, providers including OpenAI, Anthropic, Google, and Mistral have published or announced compliance frameworks for their models under Chapter V. You should reference the most current public documentation from your provider.
If you cannot confirm your GPAI provider's compliance status, that is itself a material answer — and a prompt to review your vendor risk program.
"What happens to our data when it is processed by the underlying GPAI model?"
This is a data protection question wrapped in a GPAI frame. Separate it cleanly:
"Documents processed through our platform are transmitted to [GPAI provider name] via their API under a data processing agreement. [GPAI provider] does not use API-submitted data to train their models [confirm this with your DPA]. Data is processed within [EU/US/specified region] infrastructure under [applicable transfer mechanism]. Our DPA with [GPAI provider] is available for review."
If your GPAI provider does use API data for training, you need to disclose this. Most enterprise-tier API agreements do not — but verify your contract.
The Upstream/Downstream Map Buyers Are Building
Enterprise financial services buyers running GPAI questionnaire sections are trying to build a supply chain map of their AI system. They want to know:
[GPAI Provider] → [Your Company (downstream provider)] → [Bank (deployer)] → [Bank's customers (affected persons)]
Each node in this chain has obligations. The questionnaire is asking you to document your node accurately. The bank is filling in theirs.
Your job is to make your node's documentation complete, accurate, and easy to pass upward — so the bank's legal and risk team can file it without further questions.
What to Include in Your GPAI Questionnaire Response Package
A complete response to a GPAI section in a financial services procurement questionnaire should include:
- Identity of the GPAI model(s) used (provider, model version, API tier)
- Your role clarification (downstream provider, not GPAI model provider)
- Link or attachment: GPAI provider's publicly available model card or technical documentation
- DPA reference: Your data processing agreement with the GPAI provider confirming data handling terms
- Limitation disclosure: At least three specific limitations relevant to the buyer's use case
- Human oversight note: How your system design ensures human review before operational use of AI outputs
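Teams answering several of these questionnaires may want to track the checklist above as structured data. A minimal sketch (field names are our own, not terminology mandated by the EU AI Act):

```python
from dataclasses import dataclass, field

@dataclass
class GPAIResponsePackage:
    """Illustrative record of a GPAI questionnaire response package."""
    model_provider: str                 # GPAI provider's name
    model_version: str                  # exact model identifier and API tier
    role: str                           # e.g. "downstream provider"
    model_card_ref: str                 # link to provider's public documentation
    dpa_reference: str                  # DPA confirming data-handling terms
    limitations: list[str] = field(default_factory=list)  # buyer-relevant limits
    human_oversight: str = ""           # how human review gates operational use

    def is_complete(self) -> bool:
        # Complete when every field is filled and at least three specific
        # limitations are disclosed, mirroring the checklist above.
        return all([self.model_provider, self.model_version, self.role,
                    self.model_card_ref, self.dpa_reference,
                    len(self.limitations) >= 3, self.human_oversight])
```

Keeping the package in one place like this makes it easy to reuse across questionnaires and to spot gaps before the buyer's risk team does.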
Bottom Line
The GPAI section of a financial services vendor questionnaire is not a trick question — it's a documentation request. Banks need to know what's inside the AI they're deploying, who made it, and what its known limits are. The companies that answer clearly and completely close procurement cycles faster than those that hedge or obscure their architecture.
Try Complizo free at complizo.com — paste in your customer's GPAI questionnaire section and get a complete, sourced answer set in minutes.