EU AI Act for HR Tech: What Your Customers Are About to Ask You


The EU AI Act is no longer a future concern. Prohibited AI practices have been banned since February 2, 2025. General-purpose AI obligations have applied since August 2, 2025. And the big one, the Annex III high-risk AI obligations, hits on August 2, 2026.

If you run a small or mid-size business that develops, deploys, or even uses AI systems affecting EU residents, you need a compliance plan. Not next quarter. Now.

This checklist walks you through every step — from figuring out whether the law applies to you, to generating the documentation that keeps you audit-ready.

Does the EU AI Act Apply to Your Business?

The EU AI Act has extraterritorial reach. You don't need to be headquartered in Europe.

You're in scope if your business does any of the following:

  • Develops AI systems placed on the EU market or put into service in the EU
  • Deploys AI systems that affect people located in the EU
  • Imports or distributes AI systems into the EU market
  • Uses output from AI systems where that output is used in the EU

Even a single EU customer using your AI-powered feature can trigger obligations. A SaaS company in Austin with 50 EU subscribers? In scope. A recruitment platform in London processing EU candidate data? In scope.

Your first action: Map your AI systems against your user base. If any system touches EU residents, keep reading.
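As a quick illustration (our sketch, not anything prescribed by the Act), that mapping can start as a simple cross-reference between your systems and the countries your users sit in. The field names below are hypothetical.

```python
# Hypothetical sketch: flag systems that may fall under the EU AI Act's
# extraterritorial scope because their user base includes people in the EU.
EU_COUNTRIES = {
    "AT", "BE", "BG", "HR", "CY", "CZ", "DK", "EE", "FI", "FR", "DE", "GR",
    "HU", "IE", "IT", "LV", "LT", "LU", "MT", "NL", "PL", "PT", "RO", "SK",
    "SI", "ES", "SE",
}

def systems_in_scope(systems):
    """Return systems whose user base includes at least one EU country."""
    return [s for s in systems if EU_COUNTRIES & set(s["user_countries"])]

systems = [
    {"name": "resume-screener", "user_countries": {"US", "AT"}},
    {"name": "internal-search", "user_countries": {"US"}},
]
print([s["name"] for s in systems_in_scope(systems)])  # ['resume-screener']
```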

Step 1: Build Your AI System Inventory

You cannot comply with a law you can't map to your technology. Over half of organisations lack a systematic inventory of AI systems in production or development — don't be one of them.

For every AI system your company develops, deploys, or procures (including embedded AI in third-party tools and cloud services), document:

  • System name and purpose — what does it do, and why?
  • Your role — are you the provider, deployer, importer, or distributor?
  • Input data types — what data feeds the system?
  • Output and decisions — what does the system produce or influence?
  • Affected users — who is impacted by the system's output?
  • Third-party dependencies — is the AI component from a vendor? Which one?

This inventory is the foundation of everything that follows. Without it, risk classification is guesswork.
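To make that concrete, here is a minimal sketch of one inventory entry as structured data. The schema is ours and purely illustrative; the Act does not prescribe an inventory format.

```python
from dataclasses import dataclass, field

# Illustrative schema only; the EU AI Act does not mandate a specific format.
@dataclass
class AISystemRecord:
    name: str                    # system name
    purpose: str                 # what it does, and why
    role: str                    # "provider", "deployer", "importer", "distributor"
    input_data: list[str]        # data types feeding the system
    outputs: list[str]           # what the system produces or influences
    affected_users: list[str]    # who is impacted by the output
    vendors: list[str] = field(default_factory=list)  # third-party AI components

cv_screener = AISystemRecord(
    name="CV screening assistant",
    purpose="Rank inbound job applications for recruiter review",
    role="deployer",
    input_data=["CVs", "application forms"],
    outputs=["candidate shortlist"],
    affected_users=["job applicants, including EU residents"],
    vendors=["third-party LLM API"],
)
```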

Complizo's AI System Inventory lets you register and track every AI system in under 5 minutes — free for up to 3 systems.

Step 2: Classify Each System by Risk Tier

The EU AI Act defines four risk tiers. Your obligations depend entirely on where your systems land.

Unacceptable Risk (Banned)

These AI practices have been prohibited since February 2, 2025:

  • Social scoring of individuals by public authorities or private actors
  • Real-time remote biometric identification in public spaces (with narrow law enforcement exceptions)
  • Emotion recognition in workplaces and educational institutions
  • AI that exploits vulnerabilities of specific groups (age, disability)
  • Untargeted scraping of facial images for facial recognition databases

If any of your systems fall here, stop using them immediately. Fines reach €35 million or 7% of global annual turnover — whichever is higher.

High Risk (Annex III)

This is where most compliance effort concentrates. High-risk AI systems are those used in:

  1. Biometric identification and categorisation of natural persons
  2. Critical infrastructure management — transport, water, gas, electricity
  3. Education and vocational training — exam scoring, admissions decisions
  4. Employment and worker management — recruitment, promotion, termination, task allocation
  5. Access to essential services — credit scoring, emergency response prioritisation, insurance pricing
  6. Law enforcement — evidence evaluation, crime prediction, profiling
  7. Migration and border control — visa and asylum application assessment
  8. Justice and democratic processes — legal research tools, election-related systems

Important exception: An AI system listed in Annex III is not automatically high-risk if it doesn't pose a significant risk of harm or doesn't materially influence decision-making outcomes. But you need to document that assessment — you can't just assert it.

Limited Risk

Systems like chatbots, AI-generated content tools, and emotion detection systems (outside banned contexts) carry transparency obligations. Users must know they're interacting with AI, and AI-generated content must be marked in machine-readable format.
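The Act requires the marking to be machine-readable but doesn't lock you into a single format. One simple approach, and it is only our assumption of how you might implement it, is to attach provenance metadata to every generated artefact:

```python
import json
from datetime import datetime, timezone

# Assumption: a plain JSON provenance envelope. The Act requires machine-readable
# marking of AI-generated content but does not prescribe this exact structure.
def wrap_generated_content(text: str, model: str) -> str:
    return json.dumps({
        "content": text,
        "ai_generated": True,
        "generator": model,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    })

print(wrap_generated_content("Draft job advert ...", "example-llm-v1"))
```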

Minimal Risk

Most AI systems fall here — spam filters, AI-assisted search, recommendation engines in non-critical contexts. No specific obligations beyond general product safety law.
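If it helps to see the triage in one place, here is a deliberately simplified sketch of the four-tier logic above. It is not a substitute for a full Annex III assessment, and the category labels are our shorthand rather than the Act's wording.

```python
from typing import Optional

# Simplified first-pass triage; real classification needs the full Annex III
# analysis, including the documented "significant risk" exception.
PROHIBITED_USES = {"social scoring", "workplace emotion recognition",
                   "untargeted facial image scraping"}
ANNEX_III_AREAS = {"biometrics", "critical infrastructure", "education",
                   "employment", "essential services", "law enforcement",
                   "migration", "justice"}
TRANSPARENCY_USES = {"chatbot", "ai-generated content", "emotion detection"}

def risk_tier(use_case: str, annex_area: Optional[str] = None) -> str:
    if use_case in PROHIBITED_USES:
        return "unacceptable"   # banned since February 2, 2025
    if annex_area in ANNEX_III_AREAS:
        return "high"           # unless the documented exception applies
    if use_case in TRANSPARENCY_USES:
        return "limited"        # transparency obligations
    return "minimal"

print(risk_tier("chatbot"))                              # limited
print(risk_tier("cv ranking", annex_area="employment"))  # high
```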

Complizo's Risk Classification engine walks you through Annex III classification with guided questions — no compliance consultant required.

Step 3: Meet Your Role-Specific Obligations

Your obligations vary based on your role in the AI value chain.

If you're a provider (you built or branded the AI system): you carry the heaviest obligations. You must implement a risk management system, ensure data governance, produce technical documentation, enable human oversight, and register high-risk systems in the EU database.

If you're a deployer (you use an AI system provided by someone else): you must use the system according to instructions, monitor its operation, conduct a fundamental rights impact assessment for high-risk systems, and keep logs.

If you're an importer or distributor: you must verify the provider's conformity assessment, CE marking, and documentation before placing the system on the EU market or making it available there.

Most SMBs are deployers — but if you've fine-tuned a model, built a custom pipeline, or put your brand on an AI feature, you may be a provider without realising it.
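One practical way to keep this straight internally (our sketch, with obligations paraphrased from the summaries above, not quoted from the Act) is a simple role-to-obligations lookup attached to each inventory entry:

```python
# Paraphrased headline obligations per role; not a complete legal checklist.
OBLIGATIONS = {
    "provider": ["risk management system", "data governance",
                 "technical documentation", "human oversight design",
                 "EU database registration for high-risk systems"],
    "deployer": ["use per provider instructions", "operational monitoring",
                 "fundamental rights impact assessment for high-risk systems",
                 "log retention"],
    "importer": ["verify provider conformity assessment and documentation"],
    "distributor": ["verify CE marking and documentation before distribution"],
}

def checklist(role: str) -> list[str]:
    return OBLIGATIONS.get(role.lower(), [])

print(checklist("deployer"))
```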

Step 4: Generate the Required Documentation

For high-risk AI systems, you need six categories of documentation, audit-ready and current:

  1. Model Cards — technical description of the system's capabilities, limitations, and intended use
  2. Data Governance Records — how training and operational data is collected, labelled, and managed
  3. Human Oversight Protocols — how humans monitor, intervene, and override the system
  4. Conformity Assessments — demonstration that the system meets all applicable requirements
  5. Risk Management Records — identification, analysis, and mitigation of risks throughout the system lifecycle
  6. Transparency Notices — clear information for users about the system's functioning and limitations
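To give a feel for the first item on that list, here is a minimal model-card renderer. The headings are illustrative only; the Act's technical documentation requirements (Annex IV) go well beyond this skeleton.

```python
# Illustrative only: Annex IV technical documentation is broader than this.
def render_model_card(system: dict) -> str:
    return "\n".join([
        f"# Model Card: {system['name']}",
        f"**Intended use:** {system['intended_use']}",
        f"**Capabilities:** {system['capabilities']}",
        f"**Known limitations:** {system['limitations']}",
        f"**Human oversight:** {system['oversight']}",
    ])

print(render_model_card({
    "name": "CV screening assistant",
    "intended_use": "Rank inbound job applications for recruiter review",
    "capabilities": "Keyword and skills matching against a job description",
    "limitations": "Not validated for non-English CVs",
    "oversight": "A recruiter reviews every shortlist before any decision",
}))
```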

Producing these from scratch takes weeks with a consultant — and costs €5,000–€50,000+.

Complizo generates all six document types as audit-ready PDFs using AI. Pro plan starts at $99/month.

Step 5: Set Up Ongoing Monitoring

Compliance isn't a one-time exercise. The EU AI Act requires:

  • Post-market monitoring — continuously track system performance and report serious incidents
  • Log retention — maintain operational logs for the periods specified in the Act
  • Incident reporting — report serious incidents to national authorities without undue delay
  • Documentation updates — keep all compliance documents current as your systems evolve

Build this into your engineering and operations workflows now, before the deadline.
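One way to make that concrete in engineering terms (our sketch, not a prescribed format) is structured, timestamped event logging with an explicit escalation path for serious incidents:

```python
import json
import logging
from datetime import datetime, timezone

# Assumption: structured JSON logs plus an escalation hook. The Act requires log
# keeping and incident reporting but does not mandate this exact implementation.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_monitoring")

def notify_national_authority(record: dict) -> None:
    # Placeholder: wire this to your real incident-reporting workflow.
    print("ESCALATE:", record["system"], record["event"])

def log_ai_event(system: str, event: str, serious_incident: bool = False) -> None:
    record = {
        "system": system,
        "event": event,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "serious_incident": serious_incident,
    }
    logger.info(json.dumps(record))
    if serious_incident:
        notify_national_authority(record)

log_ai_event("cv-screener", "demographic drift detected in shortlists",
             serious_incident=True)
```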

Step 6: Train Your Team

The EU AI Act requires providers and deployers to ensure an adequate level of AI literacy among staff who operate and use AI systems, and anyone assigned to human oversight of a high-risk system must have the necessary competence and training. At a minimum, that training covers:

  • Understanding how the AI system works
  • Knowing when and how to intervene or override
  • Recognising signs of malfunction or bias
  • Understanding reporting obligations

Document your training programme and keep attendance records.

The Digital Omnibus: A Delay, Not a Reprieve

On March 26, 2026, the European Parliament voted (569-45-23) to adopt its position on the Digital Omnibus package, which proposes delaying certain AI Act deadlines. Under the Parliament's proposal, Annex III high-risk obligations would shift to December 2, 2027, with sectoral systems pushed to August 2, 2028.

But this is not law yet. The Council and Parliament must still enter trilogue negotiations. The original August 2, 2026 deadline remains legally binding until a final text is adopted.

Even if the delay passes, the smart move is to start now. Companies that wait until 2027 will face the same scramble — except the consultant market will be even more expensive, and regulators will have even less patience.

Getting compliant early gives you a competitive advantage. It's a trust signal for EU customers and partners. It's cheaper to do it methodically over months than in a panic over weeks.

Your 10-Point Quick Checklist

  1. Confirm whether the EU AI Act applies to your business
  2. Build a complete AI system inventory
  3. Classify each system by risk tier (Unacceptable / High / Limited / Minimal)
  4. Determine your role for each system (Provider / Deployer / Importer / Distributor)
  5. Stop any prohibited AI practices immediately
  6. Generate required documentation for high-risk systems
  7. Register high-risk systems in the EU database
  8. Set up post-market monitoring and incident reporting
  9. Train staff on AI literacy and oversight obligations
  10. Schedule quarterly compliance reviews to keep documents current

Start for Free

You don't need a €50,000 consultant or a $200,000 enterprise platform.

Complizo is purpose-built for EU AI Act compliance. Register your AI systems, classify risk, and generate audit-ready documentation — all self-serve, setup in 5 minutes.

Free for up to 3 AI systems. No credit card required. Get started


Complizo provides an EU AI Act compliance framework. It does not provide legal advice. For legal questions specific to your situation, consult a qualified attorney. For the official regulation text, visit EUR-Lex.
