
24 April 2026

EU AI Act for European mid-market companies — a practical 2026 checklist

From August 2026, the EU AI Act imposes risk-classification, transparency, and governance obligations on companies deploying AI in Europe. This is the practical 12-item checklist for mid-market organisations preparing now.

7 min read

What's in force from August 2026

The EU AI Act — Regulation (EU) 2024/1689 — phases in across 2025 to 2027. The August 2026 milestone is the consequential one for most mid-market organisations: it activates the full obligations on high-risk AI systems under Articles 6 to 49 and the transparency duties on limited-risk systems under Article 50. The AI literacy requirement under Article 4, which applies to every organisation deploying AI in the EU regardless of risk tier, has already applied since February 2025.

For mid-market companies, the question stopped being "should we prepare?" some time in 2025. Through Q2 2026 the question is "are we audit-ready by August?" The Capgemini Research Institute Q4 2024 survey on EU AI Act readiness found that 31% of European mid-market companies have a named AI governance owner. The other 69% will spend Q3 2026 building evidence trails reactively, and a reactive evidence trail makes for a much harder conversation with regulators than one built prospectively now.

The four risk tiers

The Act classifies AI systems into four tiers. The first move for any mid-market organisation is to inventory its AI use cases and place each one in the correct tier.

Prohibited (Article 5). Subliminal manipulation, exploitation of vulnerabilities of specific groups, social scoring by public authorities, untargeted scraping of facial images, real-time remote biometric identification in public spaces (with narrow law-enforcement exceptions), and emotion recognition in workplaces or educational institutions. These are out as of February 2025 — they are not lawful at any compliance level. Mid-market deployments at risk here are typically inadvertent: HR analytics tools that infer affective state, customer service tools that score callers on emotional vulnerability.

High-risk (Article 6 + Annex III). AI used in employment decisions (CV screening, performance evaluation, allocation of work), credit scoring of individuals, access to essential public and private services, education (admissions, evaluation), critical infrastructure operation, law enforcement, migration. These trigger the full conformity-assessment regime under Articles 8 to 49: technical documentation, automatic logging, transparency, human oversight, accuracy and robustness requirements, plus post-market monitoring under Article 72. Mid-market examples: an HR-tech vendor running CV screening for B2B clients, a fintech running credit decisions, an EdTech grading student work.

Limited-risk (Article 50). Chatbots, AI-generated or AI-manipulated content (text, images, audio, video), biometric categorisation, emotion recognition outside the prohibited contexts. The duty is one of disclosure: users must be informed they are interacting with AI or consuming AI-generated content. Most B2B SaaS chatbots, most marketing-content generators, and most AI-enhanced customer-service tools fall here.

Minimal-risk. Spam filters, recommendation systems for non-essential services, inventory optimisation, demand forecasting. No specific obligations under the Act, though the Article 4 literacy requirement still applies.
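
If your inventory lives in code, the taxonomy is small enough to encode directly. The sketch below is illustrative Python rather than anything the Act prescribes; the system names are hypothetical, and the example tiers are lifted from the paragraphs above. A real classification still needs the written rationale described in the checklist that follows.

```python
from enum import Enum

class RiskTier(Enum):
    """The four EU AI Act risk tiers, as described above."""
    PROHIBITED = "prohibited"   # Article 5: unlawful at any compliance level
    HIGH = "high"               # Article 6 + Annex III: full conformity regime
    LIMITED = "limited"         # Article 50: disclosure duties
    MINIMAL = "minimal"         # no specific obligations beyond Article 4

# Illustrative first-pass classifications drawn from the examples above.
# System names are hypothetical; the tier of a real system depends on its
# concrete use, not on its product category.
FIRST_PASS = {
    "cv_screening_tool": RiskTier.HIGH,            # employment decisions (Annex III)
    "customer_support_chatbot": RiskTier.LIMITED,  # must disclose AI interaction
    "spam_filter": RiskTier.MINIMAL,
    "caller_vulnerability_scoring": RiskTier.PROHIBITED,  # Article 5 territory
}
```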

The 12-item readiness checklist

Each item below maps to a specific Article of the Act. The list is what we would expect a mid-market organisation to be able to evidence on the day of an audit, regardless of risk tier — items 3 to 5, 7, and 9 to 11 narrow to high-risk systems specifically, and item 6 spans both the high-risk and limited-risk tiers.

  1. AI use case inventory. Every system in the organisation that uses AI, identified and classified per Article 6 and Annex III. The list is dynamic; expect to refresh quarterly. Most mid-market organisations under-count by a factor of three on the first pass — vendor SaaS with embedded AI is the usual gap. A minimal machine-readable shape for the register is sketched just after this list.

  2. Risk classification per use case. Each entry on the inventory carries a formal risk classification with a one-paragraph rationale. "We think it's not high-risk" is not a classification. Article 6 has objective criteria; apply them.

  3. Data governance documentation per Article 10. For every high-risk system, demonstrable governance over training, validation and testing data: relevant design choices, data collection processes, data preparation, examination for biases, identification of gaps. This is the hardest item for mid-market organisations because it requires evidence going back to the moment the data was first acquired.

  4. Technical documentation per Article 11 and Annex IV. A technical file maintained per high-risk system covering its purpose, design, development, evaluation, monitoring and lifecycle. This is the equivalent of a CE-marking technical file in product safety, and the standard is similar.

  5. Automatic logging of events per Article 12. Required for high-risk systems; we treat it as best practice everywhere. Logs must allow any given output to be traced back to the inputs that produced it, and must be retained for the lifecycle of the system. A minimal logging pattern also follows the list.

  6. Transparency disclosures. Per Article 13 for high-risk systems (instructions for use, characteristics, capabilities, limitations) and per Article 50 for limited-risk systems (disclosure of AI interaction, AI-generated content). The Article 50 duties are the ones mid-market organisations most often under-comply with.

  7. Human oversight processes per Article 14. Named humans with override authority over high-risk systems, plus the technical and organisational measures that let them actually intervene — not just on paper. Includes clear stop conditions and SLAs for human review.

  8. AI literacy programme per Article 4. This applies to ALL organisations deploying AI, not only those with high-risk systems. Staff must have sufficient AI literacy for their role; the literacy requirement is differentiated by role, not uniform. Document the curriculum, the cohorts, the completion rates.

  9. Conformity assessment for high-risk systems per Articles 43 to 49. Either internal control (Annex VI) or third-party (Annex VII) depending on the system. Output is the EU declaration of conformity and CE marking.

  10. Post-market monitoring plan per Article 72. Continuous, documented monitoring of high-risk systems in operation, with a feedback loop into risk management and the technical file. Annual reporting expected.

  11. Incident reporting process per Article 73. Serious incidents — defined in Article 3 — must be reported to the relevant national authority within 15 days. Build the process before the first incident, not after.

  12. Internal AI governance committee. Not a legal requirement under the Act. It is the practical evidence path: a named cross-functional body that reviews items 1 through 11 on a regular cadence and signs off the technical files. Without it, the documentation tends to drift out of sync with operational reality between audits.
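
For items 1 and 2, a machine-readable register entry needs at minimum a tier, a rationale, and a review date. The shape below is one possibility in Python; every field name is our suggestion rather than anything mandated by the Act, and the example entry is hypothetical.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class InventoryEntry:
    """One row of the AI use case inventory (checklist items 1 and 2)."""
    system_name: str
    owner: str              # the named person accountable for this system
    vendor_embedded: bool   # vendor SaaS with embedded AI is the usual gap
    risk_tier: str          # "prohibited" | "high" | "limited" | "minimal"
    rationale: str          # the one-paragraph Article 6 / Annex III reasoning
    classified_on: date
    next_review: date       # the inventory is dynamic; refresh quarterly

entry = InventoryEntry(
    system_name="cv_screening_tool",
    owner="head_of_people_ops",
    vendor_embedded=True,
    risk_tier="high",       # employment decisions fall under Annex III
    rationale="Screens applicant CVs for shortlisting, an Annex III "
              "employment use case; the full high-risk regime applies.",
    classified_on=date(2026, 4, 1),
    next_review=date(2026, 7, 1),
)
```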
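For item 5, the property that matters is traceability: given any output, you can locate the inputs that produced it. One minimal append-only pattern is sketched below, assuming the raw inputs and outputs are retained in your own data store; the function and file names are placeholders, not anything the Act specifies.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_event(system_name: str, inputs: dict, output: str, log_path: str) -> None:
    """Append one traceable event record in the spirit of Article 12.

    Raw inputs and outputs stay in your own data store; the hashes below
    link this log record to them without copying personal data into the log.
    """
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "system": system_name,
        "input_sha256": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }
    with open(log_path, "a", encoding="utf-8") as f:  # append-only JSON Lines
        f.write(json.dumps(record) + "\n")

# Example: one CV-screening decision, traceable back to its inputs.
log_event(
    system_name="cv_screening_tool",
    inputs={"candidate_id": "c-1042", "model_version": "2.3.1"},
    output="shortlist",
    log_path="ai_event_log.jsonl",
)
```

JSON Lines suits this because each event is one immutable line and the file only ever grows, which makes after-the-fact editing easy to spot in ordinary version control.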

What "documented" actually means

Regulators want evidence, not assertions. Specifically: timestamped, version-controlled, signed-off documentation that pre-dates the audit. A document created reactively after a compliance request looks like — and is — a reactive document. The audit trail begins on the day the system is first conceived, not on the day a regulator asks about it.

Three properties separate audit-ready documentation from documentation that fails an audit. Provenance: every claim is traceable to the source data, decision or person that originated it. Cadence: review and sign-off happens on a defined schedule, not on demand. Independence: someone other than the system's owner reviews the documentation and signs it off. The 31% of European mid-market organisations that have a named AI governance owner usually have at least the third property in place. The other 69% almost never do.

A practical heuristic: if your AI documentation is in slide decks, you are not audit-ready. Slide decks are designed for persuasion, not for evidence. If it lives in a versioned register — git-backed Markdown is fine, a Confluence space is fine, a structured wiki is fine — you have a starting position.

Where mid-market typically falls down

The same three failure modes recur in audits we have observed.

The first is the Article 50 disclosure gap. Limited-risk systems — chatbots, content generators — do not feel like regulated AI to most organisations because they are not high-risk. The Article 50 disclosure duty applies anyway, and it is enforceable by the same fines.

The second is the AI literacy gap under Article 4. Organisations that do not have high-risk AI assume Article 4 does not apply to them. It applies to everyone. The literacy programme must be in place, documented, and demonstrably differentiated by role.

The third is the vendor AI gap. Mid-market organisations consume AI primarily through SaaS vendors. Where the vendor classifies the embedded AI as high-risk, the deployer (the organisation using it) inherits responsibilities under Article 26. Most mid-market organisations have not read the Article 26 deployer obligations and assume the vendor carries the full risk. They do not.

What to do today

The Governance & ethics dimension of the Arqmetrica AI Maturity Index scores you against EU AI Act readiness signals specifically: inventory completeness, risk classification discipline, data governance evidence, post-market monitoring posture, incident reporting readiness. The full anchor logic — which questions map to which Articles of the Act — is documented on the methodology page.

If you are the named owner of AI governance in your organisation, this checklist is the agenda for your next two quarterly reviews. If you are not the named owner and there is no named owner, naming one is item zero on the list — without it, items 1 through 12 will not be defensible.