
Why ChatGPT Can't Answer Your Customer's EU AI Act Questionnaire

9 min read

The EU AI Act deadline is August 2, 2026. That gives your business roughly 133 days to get compliant — or face fines up to €35 million.

If you're running a company with 5 to 200 employees and you use AI in any capacity — a chatbot on your website, an AI-powered hiring tool, a recommendation engine — the EU AI Act applies to you. And unlike GDPR, where enforcement ramped up slowly in the early days, regulators have signaled they intend to enforce this one fast.

This EU AI Act compliance checklist breaks down exactly what SMBs need to do, step by step, without the jargon and without the €50,000 consultant fee.

Why SMBs Can't Ignore the EU AI Act

Most compliance content online is written for enterprises with dedicated legal teams and six-figure budgets. That's not you.

Here's the reality for small and mid-size businesses:

The EU AI Act creates obligations for anyone who develops, deploys, or uses AI systems that affect EU residents. That includes a 15-person SaaS startup in Berlin just as much as it includes Google. The fines don't scale down for smaller companies — €35 million or 7% of global annual turnover for prohibited AI practices, €15 million or 3% for other violations.

The difference? Google has a compliance team. You probably don't.

That's exactly why having a clear checklist matters. You need a structured approach that doesn't require a law degree to follow.

Step 1: Build Your AI System Inventory

Before you can classify risk or generate documents, you need to know what AI you're actually using. This sounds obvious, but most companies undercount their AI systems by 40–60%.

Start by cataloguing every AI system your organization touches:

  • AI you built in-house (recommendation algorithms, classification models, NLP pipelines)
  • AI you purchased from vendors (CRM scoring tools, AI-powered analytics)
  • AI embedded in platforms you use (Salesforce Einstein, HubSpot AI features, Copilot integrations)
  • AI used by contractors or partners who process data on your behalf

For each system, document the purpose, what data it processes, what decisions it influences, and who it affects. If it touches EU residents in any way, it's in scope.
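
Whether you track the inventory in a spreadsheet or a script, one record per system is enough to start. Here's a minimal sketch in Python; the field names are illustrative, not prescribed by the Act:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One inventory entry per AI system (field names are illustrative)."""
    name: str
    source: str                   # "in-house", "vendor", "embedded", or "contractor"
    purpose: str
    data_processed: list[str]
    decisions_influenced: str
    affects_eu_residents: bool    # if True, the system is in scope

inventory = [
    AISystemRecord(
        name="Resume screening tool",
        source="vendor",
        purpose="Rank inbound job applications",
        data_processed=["CVs", "cover letters"],
        decisions_influenced="Which candidates advance to interview",
        affects_eu_residents=True,
    ),
]

# Anything that touches EU residents stays on the compliance list.
in_scope = [s for s in inventory if s.affects_eu_residents]
```

Even this bare-bones structure forces you to answer the four questions above for every system, which is most of the work.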

Pro tip: Most SMBs discover 2–3x more AI systems than they initially expected once they look at their full vendor stack. Complizo's AI System Inventory automates this cataloguing process and keeps it up to date as you add new tools.

Step 2: Classify Each System by Risk Tier

The EU AI Act defines four risk tiers. Every AI system in your inventory needs to be mapped to one:

Unacceptable Risk — Banned Outright

These AI practices have been prohibited since February 2, 2025. If you're doing any of these, stop immediately:

  • Social scoring systems that evaluate people based on behaviour or personality
  • Real-time remote biometric identification in public spaces (with narrow law enforcement exceptions)
  • AI that exploits vulnerabilities of specific groups (age, disability, economic situation)
  • Emotion recognition in workplaces and educational institutions (with limited exceptions)

High Risk — The Core of Compliance

This is where most of the regulatory weight falls, and it's where SMBs get tripped up. High-risk AI systems are defined in Annex III of the Act and include AI used in:

  • Employment and worker management (hiring tools, performance monitoring, task allocation)
  • Education and vocational training (admissions decisions, grading, proctoring)
  • Access to essential services (credit scoring, insurance pricing, social benefits)
  • Law enforcement and border management
  • Critical infrastructure management

If any AI system in your inventory falls into one of these categories, you'll face the full set of obligations: risk management systems, data governance, technical documentation, human oversight protocols, accuracy and robustness requirements, and conformity assessments.

Limited Risk — Transparency Required

AI systems that interact with people (chatbots, AI-generated content, emotion detection) must disclose their AI nature. Users need to know they're talking to a machine, and AI-generated content must be labelled as such.

Minimal Risk — No Specific Obligations

Spam filters, AI in video games, inventory management AI — these carry no specific regulatory burden, though voluntary codes of conduct are encouraged.
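
As a rough first pass, you can encode the tier mapping as a lookup table and flag anything unknown for manual review. The category strings below paraphrase the examples in this article; the binding text is Annex III itself:

```python
# Illustrative mapping of use cases to EU AI Act risk tiers.
# These paraphrase the examples above, not legal definitions.
RISK_TIERS = {
    "social scoring": "unacceptable",
    "hiring tool": "high",
    "credit scoring": "high",
    "exam proctoring": "high",
    "customer chatbot": "limited",
    "spam filter": "minimal",
}

def classify(use_case: str) -> str:
    """Return the risk tier for a known use case; unknown cases go to a person."""
    return RISK_TIERS.get(use_case, "needs manual review")
```

Note the default: a use case you haven't mapped is routed to human review, never silently treated as minimal risk.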

Not sure where your systems fall? Complizo's Risk Classification tool walks you through the Annex III criteria in plain language and gives you a definitive classification for each system in under 5 minutes.

Step 3: Generate Required Documentation

For high-risk AI systems, the EU AI Act requires six categories of documentation that must be audit-ready before August 2, 2026:

Model Cards

Technical specifications of your AI system — architecture, training methodology, performance metrics, known limitations.

Data Governance Records

How you source, validate, clean, and manage the data your AI systems use. This must cover training data, validation data, and ongoing monitoring data.

Human Oversight Protocols

Documented procedures for how human operators monitor, intervene in, and override AI system decisions. This isn't just a checkbox — regulators want evidence of meaningful human control.

Conformity Assessments

Self-assessments (or third-party assessments for certain biometric systems) demonstrating your AI system meets all applicable requirements.

Risk Management Records

A living document covering risk identification, analysis, evaluation, and mitigation throughout the AI system's lifecycle. This must be updated continuously, not written once and shelved.

Transparency Notices

Clear, accessible information for users about how the AI system works, its intended purpose, and its limitations.

Generating these documents manually takes most companies 3–6 months and costs anywhere from €5,000 to €50,000 when working with compliance consultants. Complizo generates all six document types automatically using AI, producing audit-ready PDFs for a fraction of consultant costs. Free for up to 3 AI systems — no credit card required.

Step 4: Implement a Risk Management System

The EU AI Act doesn't just want documents. It wants an ongoing risk management process that covers the entire lifecycle of each high-risk AI system.

Your risk management system must:

  1. Identify and analyse known and reasonably foreseeable risks
  2. Estimate and evaluate risks that may emerge during intended use and foreseeable misuse
  3. Adopt risk mitigation measures based on your analysis
  4. Test the effectiveness of those measures
  5. Document everything — risk registers, mitigation decisions, test results
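
The five steps above map naturally onto a risk register. A minimal sketch, assuming one entry per identified risk (field names are illustrative):

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class RiskEntry:
    """One risk register row: identify, evaluate, mitigate, test, document."""
    risk: str                          # step 1: identified risk
    likelihood: str                    # step 2: "low" / "medium" / "high"
    impact: str
    mitigation: str                    # step 3: adopted measure
    test_result: Optional[str] = None  # step 4: filled in after testing
    last_reviewed: date = field(default_factory=date.today)  # step 5: audit trail

entry = RiskEntry(
    risk="Screening model ranks candidates differently by gender",
    likelihood="medium",
    impact="high",
    mitigation="Quarterly bias audit on anonymised applicant data",
)
entry.test_result = "passed"  # recorded after the mitigation is tested
```

The `last_reviewed` date matters: a register where every entry was last touched a year ago is exactly the "written once and shelved" pattern regulators are told to look for.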

This is an ongoing obligation. Risk management doesn't end when you file your initial documentation. You need continuous monitoring and regular reassessment.

Step 5: Establish Data Governance Practices

High-risk AI systems must be trained and operated with data that meets specific quality standards. The regulation requires:

  • Clear criteria for data collection and selection
  • Bias examination and mitigation procedures
  • Identification of data gaps and shortcomings
  • Appropriate data preparation steps (annotation, labelling, cleaning)
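
Even a crude automated check helps you spot gaps and imbalance before an auditor does. A toy sketch, assuming your records are plain dictionaries:

```python
def data_quality_report(records: list[dict], label_key: str) -> dict:
    """Toy governance check: count records with missing fields and tally labels."""
    with_gaps = sum(1 for r in records if any(v is None for v in r.values()))
    label_counts: dict = {}
    for r in records:
        label_counts[r[label_key]] = label_counts.get(r[label_key], 0) + 1
    return {
        "total": len(records),
        "with_missing_fields": with_gaps,   # data gaps to document and fix
        "label_counts": label_counts,       # a skewed split hints at bias
    }

report = data_quality_report(
    [
        {"age": 34, "approved": True},
        {"age": None, "approved": False},  # missing value: a documented gap
        {"age": 51, "approved": True},
    ],
    label_key="approved",
)
```

Run something like this on every batch of data you feed the system, and keep the reports — they double as the "identification of data gaps" evidence the regulation asks for.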

For SMBs using third-party AI models (which is most of you), this means documenting what data you feed into the system and how you validate its outputs — even if you didn't build the underlying model.

Step 6: Set Up Human Oversight

Every high-risk AI system must have human oversight measures proportionate to the risks involved. This means:

  • Designated people who understand the system's capabilities and limitations
  • Clear procedures for when and how humans intervene in automated decisions
  • Ability to override or reverse AI decisions
  • Monitoring mechanisms to catch anomalous behaviour

Document who is responsible, what training they've received, and what escalation paths exist.
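
One common way to make oversight concrete is a confidence threshold: decisions the model is unsure about go to a person instead of shipping automatically. The threshold value here is an illustrative assumption, not a figure from the Act:

```python
def route_decision(ai_decision: str, confidence: float, threshold: float = 0.8) -> str:
    """Send low-confidence AI decisions to a human reviewer (illustrative threshold)."""
    if confidence < threshold:
        return "escalate_to_human"
    return ai_decision
```

So `route_decision("reject", 0.55)` escalates rather than rejecting the candidate outright. Log every escalation: that log is your evidence that human oversight actually operates, not just exists on paper.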

Step 7: Prepare for Ongoing Compliance

Compliance isn't a one-time event. After August 2, 2026, you'll need to:

  • Monitor your AI systems continuously for performance degradation, bias drift, and new risks
  • Update documentation whenever you modify an AI system
  • Report serious incidents to national authorities
  • Maintain your conformity assessment as systems evolve
  • Stay current as the regulation is refined through implementing acts and standards
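
Continuous monitoring can start as a simple comparison against a recorded baseline. A sketch, assuming you track accuracy per evaluation run and choose your own tolerance:

```python
def drift_alert(baseline: float, current: float, tolerance: float = 0.05) -> bool:
    """Flag performance degradation beyond a chosen tolerance (illustrative value)."""
    return (baseline - current) > tolerance
```

A drop from 0.92 to 0.84 trips the alert; when it fires, reassess the affected entries in your risk register and update the documentation before the change compounds.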

The Digital Omnibus: A Possible Extension (But Don't Count on It)

On March 13, 2026, the EU Council proposed delaying the Annex III high-risk deadline to December 2027 for stand-alone AI systems. This sounds like good news, but it is not yet law. The European Parliament must still set its position, and trilogue negotiations haven't begun.

The smart move is to treat August 2, 2026, as your hard deadline. If the extension passes, you'll be ahead of your competitors. If it doesn't, you'll be compliant when enforcement begins.

What This Costs (Realistically)

Here's the honest math for SMBs:

  • Compliance consultants: €5,000–€50,000+ for an initial assessment
  • Enterprise governance tools (like those built for Fortune 500s): $50,000–$200,000+/year
  • Internal team time (DIY approach): 200–500 hours across legal, engineering, and ops

Or you can use a purpose-built tool designed for companies your size. Complizo starts at $0/month for up to 3 AI systems and scales to $499/month for unlimited systems. Setup takes 5 minutes. No sales call, no contract, cancel anytime.

Your 133-Day Action Plan

Here's the compressed timeline if you're starting today:

Days 1–14: Complete your AI system inventory and risk classification. This is the foundation everything else builds on.

Days 15–45: Generate all required documentation for high-risk systems. Focus on the six document types: Model Cards, Data Governance Records, Human Oversight Protocols, Conformity Assessments, Risk Management Records, and Transparency Notices.

Days 46–90: Implement your risk management system and data governance practices. Assign human oversight roles and train your team.

Days 91–133: Test everything. Run internal audits. Fix gaps. Build your audit trail.

You don't need a compliance team to do this. You need a structured approach and the right tool. Start your EU AI Act compliance checklist today — free.


Complizo is a self-service EU AI Act compliance platform built for small and mid-size businesses. It is not a law firm and does not provide legal advice. For legal counsel, consult a qualified attorney in your jurisdiction.
