
The 10-Step EU AI Act Compliance Checklist for SMBs


Your AI systems serve EU customers. The EU AI Act applies to you. Here's exactly what to do — step by step — before enforcement hits.


The EU AI Act is the world's first comprehensive AI regulation, and it doesn't care how big your company is. If your AI systems affect people in the EU, you have obligations — whether you're a 10-person startup or a 200-person scale-up.

The problem? Most compliance guidance is written for enterprises with dedicated legal teams and six-figure budgets. If you're an SMB without a compliance department, you need a checklist that's practical, accurate, and built for your reality.

This is that checklist.

What's the Current Timeline?

Before diving into the checklist, here's where things stand as of March 2026:

Already enforced:

  • February 2, 2025: Prohibited AI practices banned (social scoring, real-time biometric surveillance in public spaces, manipulation techniques)
  • August 2, 2025: General-Purpose AI (GPAI) model obligations in effect — providers must have documentation packages ready for the EU AI Office on request

Coming next:

  • August 2, 2026: High-risk AI system obligations (Annex III) take effect — this is the big one for most SMBs
  • August 2, 2027: Full enforcement across all remaining provisions

Important update: On March 26, 2026, the European Parliament voted to approve the Digital Omnibus package, which proposes delaying the Annex III high-risk deadline to December 2, 2027 (standalone systems) and August 2, 2028 (embedded products). Trilogue negotiations between Parliament, Council, and Commission are expected to begin in April 2026. However, this delay is not yet law — trilogue must conclude and the final text must be adopted. Treat August 2, 2026 as the live deadline and use any potential extension as a head start, not an excuse to wait.

Step 1: Build Your AI System Inventory

You can't comply with what you can't see. The very first thing every SMB needs is a complete inventory of every AI system in use.

What to document for each system:

  • System name and vendor (or "in-house" if you built it)
  • What it does and who it affects
  • Where it's deployed (EU customers? EU employees? Both?)
  • Who owns it internally (the person accountable for compliance)
  • Date deployed and current version
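The fields above can be captured in a simple structured record. This is an illustrative sketch, not an official EU AI Act schema; the class and field names are assumptions chosen to mirror the checklist:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical inventory record; field names mirror the checklist above,
# not any official EU AI Act template.
@dataclass
class AISystemRecord:
    name: str         # system name
    vendor: str       # vendor name, or "in-house" if you built it
    purpose: str      # what it does and who it affects
    deployment: str   # "EU customers", "EU employees", or "both"
    owner: str        # person internally accountable for compliance
    deployed_on: date
    version: str

inventory = [
    AISystemRecord("CV Screener", "ExampleHR Ltd", "ranks job applicants",
                   "EU customers", "jane.doe", date(2024, 3, 1), "2.1"),
]
print(len(inventory))  # number of systems inventoried so far
```

Even a spreadsheet with these seven columns works; the point is that every system has one accountable owner and a recorded deployment scope.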

Don't forget third-party AI. If you use an AI-powered hiring tool, a customer service chatbot, an AI credit scoring system, or even AI-assisted medical diagnostics — these all count. You're responsible for the AI you deploy, even if someone else built it.

Pro tip: Most SMBs are surprised to find they use 5–15 AI systems once they run a full audit. Start with your software vendor list and ask: "Does this use AI or machine learning?" The answer is increasingly yes.


Start your AI inventory for free with Complizo →

Step 2: Classify Each System by Risk Tier

The EU AI Act uses a four-tier risk classification. Your compliance obligations depend entirely on which tier each system falls into.

Unacceptable Risk (Banned)

These AI uses are prohibited outright. If you're doing any of these, stop immediately:

  • Social scoring systems that evaluate people based on behavior or personal traits
  • Real-time biometric identification in public spaces (with narrow law enforcement exceptions)
  • AI that exploits vulnerabilities of specific groups (age, disability)
  • Emotion recognition in workplaces and educational institutions (with limited exceptions)

High Risk (Annex III — Strictest Requirements)

This is where most SMB compliance work lives. High-risk systems include AI used in:

  • Recruitment and HR: CV screening, interview assessment, hiring decisions
  • Credit and finance: Credit scoring, loan approval, insurance risk assessment
  • Education: Student assessment, admissions decisions, learning optimization
  • Healthcare: Diagnostic assistance, treatment recommendations, patient triage
  • Critical infrastructure: Energy, water, transport management systems

If any of your AI systems fall here, you have significant documentation and governance obligations (covered in Steps 3–7).

Limited Risk (Transparency Obligations)

These systems require transparency but not full compliance documentation:

  • Chatbots (users must know they're interacting with AI)
  • AI-generated content (must be labeled as such)
  • Emotion recognition systems (where permitted)
  • Biometric categorization systems

Minimal Risk (No Specific Obligations)

Most AI applications — spam filters, AI-recommended playlists, inventory optimization — fall here. No specific regulatory requirements, but voluntary codes of conduct are encouraged.

The critical question: Do any of your AI systems touch hiring, credit, healthcare, education, or critical infrastructure? If yes, you almost certainly have high-risk systems that need the full compliance treatment.
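The four-tier logic above can be sketched as a simple lookup. The category names here are illustrative shorthand, not the Act's legal wording, and a real classification needs legal review of the specific use case:

```python
# Illustrative mapping from use-case category to EU AI Act risk tier.
# Category labels are assumptions, not the regulation's legal terms.
PROHIBITED = {"social_scoring", "realtime_public_biometric_id"}
HIGH_RISK = {"recruitment", "credit_scoring", "education",
             "healthcare", "critical_infrastructure"}
LIMITED_RISK = {"chatbot", "ai_generated_content",
                "emotion_recognition", "biometric_categorisation"}

def risk_tier(category: str) -> str:
    if category in PROHIBITED:
        return "unacceptable"
    if category in HIGH_RISK:
        return "high"
    if category in LIMITED_RISK:
        return "limited"
    return "minimal"

print(risk_tier("recruitment"))  # high
print(risk_tier("spam_filter"))  # minimal
```

Run this over your Step 1 inventory and anything that comes back "high" goes into the full compliance workflow in Steps 3–7.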

Classify your AI systems automatically with Complizo →

Step 3: Implement a Risk Management System

For every high-risk AI system, you need a documented risk management system that runs throughout the system's lifecycle. This isn't a one-time assessment — it's an ongoing process.

Your risk management system must include:

  • Identification and analysis of known and foreseeable risks
  • Estimation and evaluation of risks that may emerge during intended use and reasonably foreseeable misuse
  • Adoption of risk mitigation measures
  • Testing to ensure risks are managed effectively
  • Documentation of all risk decisions and their rationale

For SMBs, this means: Create a risk register for each high-risk system. Review it quarterly. Document what risks you identified, what you did about them, and why. Keep records for 10 years after the system is placed on the market or put into service.
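A risk register entry with a quarterly review cadence might look like the following sketch. The field names are assumptions, not regulatory terms:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Sketch of one risk register entry per identified risk, with a
# quarterly (~90-day) review cycle as suggested above.
@dataclass
class RiskEntry:
    risk: str         # identified risk
    mitigation: str   # what you did about it
    rationale: str    # why you chose that mitigation
    last_review: date

    def next_review(self) -> date:
        return self.last_review + timedelta(days=90)

entry = RiskEntry("bias against older applicants",
                  "re-weighted training data; added human review step",
                  "disparate impact found in Q1 audit",
                  date(2026, 1, 15))
print(entry.next_review())  # 2026-04-15
</```

A register is just a list of these entries per system; sorting by `next_review()` gives you the review queue for the quarter.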

Step 4: Get Your Data Governance in Order

High-risk AI systems must meet strict data governance requirements. The EU AI Act cares deeply about the data that trains and feeds your AI.

What you need:

  • Documentation of training, validation, and testing datasets
  • Data quality criteria and governance procedures
  • Bias detection and mitigation measures
  • Evidence that datasets are relevant, representative, and error-free (to the extent possible)
  • Clear records of data sources and processing decisions

If you use third-party AI: Request data governance documentation from your vendor. Under the EU AI Act, deployers of high-risk AI systems have obligations too — you can't simply point to your vendor and say "they handle it."
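One lightweight way to hold this information, whether it comes from your own data team or a vendor, is a per-dataset record covering the points above. The keys here are assumptions, not an official template:

```python
# Illustrative dataset documentation record; keys are assumptions,
# not an official EU AI Act form.
dataset_record = {
    "dataset": "loan_applications_2024",
    "role": "training",  # training / validation / testing
    "sources": ["internal CRM export", "credit bureau feed"],
    "quality_criteria": "deduplicated; <1% missing fields",
    "bias_checks": ["demographic parity by age band",
                    "gender balance review"],
    "known_gaps": "under-represents applicants aged 18-21",
    "last_reviewed": "2026-02-10",
}

# Flag any core governance fields that are still empty.
missing = [k for k in ("sources", "quality_criteria", "bias_checks")
           if not dataset_record.get(k)]
print(missing)  # an empty list means the core fields are filled in
```

For third-party systems, send your vendor this field list as a documentation request; an unanswered field is a gap you need to record and chase.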

Step 5: Prepare Your Technical Documentation

This is the documentation package that proves your AI system complies with the EU AI Act. For high-risk systems, you need six key document types:

  1. Model Cards — Technical specifications of each AI model: architecture, training methodology, performance metrics, known limitations
  2. Data Governance Records — How training data was collected, processed, validated, and maintained
  3. Human Oversight Protocols — How humans supervise AI decisions, when and how they can override the system, escalation procedures
  4. Conformity Assessments — Formal evaluation showing the system meets all applicable EU AI Act requirements
  5. Risk Management Records — Your ongoing risk identification, assessment, and mitigation documentation
  6. Transparency Notices — Clear information for users about the AI system's capabilities, limitations, and intended purpose

This documentation must be:

  • Created before the system is placed on the market or put into service
  • Kept up to date throughout the system's lifetime
  • Available to national authorities on request
  • Retained for 10 years after the system is placed on the market or put into service
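Tracking which of the six document types exist for each system is a one-liner. This sketch uses shorthand labels for the six types listed above:

```python
# The six document types from the list above, as shorthand labels.
REQUIRED_DOCS = ["model_card", "data_governance", "human_oversight",
                 "conformity_assessment", "risk_management",
                 "transparency_notice"]

def missing_docs(done: set) -> list:
    """Return the document types not yet produced for a system."""
    return [d for d in REQUIRED_DOCS if d not in done]

gaps = missing_docs({"model_card", "risk_management"})
print(gaps)  # the four document types still outstanding
```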

The enterprise approach: Hire consultants at €5,000–€50,000+ per assessment to create these manually.

The SMB approach: Use purpose-built tools to generate audit-ready documentation automatically. Complizo generates all six document types from your system inventory →

Step 6: Establish Human Oversight Controls

The EU AI Act requires that high-risk AI systems are designed to be effectively overseen by humans. This means:

  • Designated oversight personnel — Name the specific people responsible for monitoring each high-risk AI system
  • Override capability — Humans must be able to intervene in or override AI decisions
  • Understanding requirements — Oversight personnel must understand the system's capabilities, limitations, and risks
  • Monitoring procedures — Document how you monitor system performance and detect anomalies
  • Incident response — Create and test a procedure for when things go wrong

For SMBs: This doesn't mean hiring a dedicated AI oversight team. It means formally assigning responsibility, training the assigned person, and documenting your oversight process. One person can oversee multiple systems.

Step 7: Set Up Logging and Post-Market Monitoring

High-risk AI systems must automatically generate logs throughout their operational lifetime, and you need a plan to monitor system performance after deployment.

Logging requirements:

  • Record system inputs and outputs for traceability
  • Log all human oversight decisions and overrides
  • Maintain audit trails that regulators can review
  • Ensure logs are retained for an appropriate period
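A minimal way to satisfy the first three bullets is an append-only audit log, one JSON record per line. The schema here is an assumption; the Act requires traceable logs, not this exact format:

```python
import json
from datetime import datetime, timezone

# Sketch of an append-only audit log (JSON Lines). Each record captures
# a timestamp, the system, the event type, and event-specific detail.
def log_event(path: str, event: str, system: str, detail: dict) -> None:
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "event": event,  # e.g. "inference", "human_override"
        "detail": detail,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_event("audit.jsonl", "inference", "cv-screener",
          {"input_id": "app-123", "output": "shortlisted"})
log_event("audit.jsonl", "human_override", "cv-screener",
          {"input_id": "app-123", "decision": "rejected", "by": "jane.doe"})
```

Because the file is plain JSON Lines, regulators (or your own auditors) can review it with standard tooling, and retention is a matter of archiving the files for the required period.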

Post-market monitoring:

  • Define performance metrics and acceptable thresholds
  • Monitor for bias drift, accuracy degradation, and unintended behaviors
  • Establish a process for reporting serious incidents to authorities (within 15 days of becoming aware, for providers; deployers must inform the provider without undue delay)
  • Plan for system updates and re-assessment when changes occur

Step 8: Train Your Team

The EU AI Act requires that personnel involved with high-risk AI systems have sufficient AI literacy. This isn't optional — it's a legal obligation under Article 4.

What "AI literacy" means in practice:

  • Staff understand what AI systems are in use and what they do
  • Oversight personnel can interpret AI outputs and know when to intervene
  • Everyone knows the escalation process for AI-related incidents
  • Training is documented and refreshed regularly

For SMBs: A 60-minute workshop covering your AI inventory, risk classifications, and oversight procedures is a solid starting point. Document who attended and what was covered.

Step 9: Register in the EU Database (If Required)

Providers of high-risk AI systems must register their systems in the EU database before placing them on the market. Deployers of high-risk systems in certain categories (particularly public sector use) must also register.

Check whether your role (provider vs. deployer) and your specific use case trigger the registration requirement. The EU AI Act Compliance Checker can help you determine this.

Step 10: Build Your Compliance Dashboard

Compliance isn't a one-time project — it's an ongoing obligation. You need visibility into your compliance posture at all times.

Track these metrics:

  • Number of AI systems inventoried vs. estimated total
  • Risk classification status for each system
  • Documentation completeness (which documents are done, which are pending)
  • Outstanding risk items and mitigation status
  • Training completion rates
  • Last review dates for each system
  • Upcoming deadlines and renewal dates
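The headline numbers on such a dashboard reduce to a couple of aggregations over your inventory. The dict shape below is illustrative:

```python
# Illustrative inventory with per-system compliance status; the dict
# shape is an assumption, not a required format.
systems = [
    {"name": "CV Screener", "classified": True,
     "docs_done": 4, "docs_total": 6},
    {"name": "Support Chatbot", "classified": True,
     "docs_done": 2, "docs_total": 2},
    {"name": "Churn Model", "classified": False,
     "docs_done": 0, "docs_total": 6},
]

classified_pct = 100 * sum(s["classified"] for s in systems) // len(systems)
doc_pct = (100 * sum(s["docs_done"] for s in systems)
           // sum(s["docs_total"] for s in systems))
print(f"classified: {classified_pct}%  docs complete: {doc_pct}%")
```

Two percentages won't satisfy an auditor on their own, but they tell you at a glance where the backlog is, and they are trivially recomputed after every review.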

The goal: If a regulator knocks on your door tomorrow, you can demonstrate your compliance posture in minutes, not weeks.

Get a real-time compliance dashboard with Complizo — free for up to 3 AI systems →

What Happens If You Don't Comply?

The penalties are severe and proportional to the violation:

  • Prohibited AI practices: Up to €35 million or 7% of global annual turnover (whichever is higher)
  • High-risk system violations: Up to €15 million or 3% of global annual turnover
  • Incorrect information to authorities: Up to €7.5 million or 1% of global annual turnover

For SMBs, these fines are designed to be proportionate — but "proportionate" to 3% or 7% of your revenue is still existential for most small businesses.

SMB-Specific Advantages Under the EU AI Act

The regulation does offer some relief for smaller organizations:

  • Regulatory sandboxes: SMEs and startups get priority access, free of charge, to test AI systems under regulatory supervision
  • Simplified documentation: Where feasible, SMEs can use simplified forms of technical documentation
  • Awareness resources: The EU AI Office provides tailored guidance and dedicated communication channels for SMBs
  • Proportionate fines: Penalties are capped at proportionate levels for SMEs (though still significant)

Your 30-Day Quick Start Plan

If you're starting from zero, here's how to make meaningful progress in 30 days:

Week 1: Build your AI system inventory. List every AI tool your company uses, develops, or deploys.

Week 2: Classify each system by risk tier. Identify which systems are high-risk and require full compliance.

Week 3: Start documentation for your highest-risk system. Create the risk management record and data governance documentation first.

Week 4: Assign human oversight roles, schedule team training, and set up your compliance tracking process.

Beyond 30 days: Generate your remaining compliance documents, establish post-market monitoring, and build your ongoing review cadence.


Start Your Compliance Journey Today

Complizo is the self-service EU AI Act compliance platform built for SMBs. Register your AI systems, classify risks automatically, and generate all six required document types — in minutes, not months.

Free for up to 3 AI systems. No credit card required. Setup in 5 minutes.

Start for free at complizo.com →


Complizo provides a compliance framework to help businesses meet their EU AI Act obligations. This content is for informational purposes only and does not constitute legal advice. For legal questions specific to your situation, consult a qualified legal professional. For the latest regulatory text, refer to EUR-Lex.
