arqmetrica
State of European Mid-Market AI · Inaugural edition

Q2 2026 — what 47 actually means.

European mid-market companies score a median 47 out of 100 on AI maturity, based on responses collected during Q1 2026 (1 January – 31 March 2026), analysed and published in April. This inaugural quarterly unpacks that number — what's behind it, where the sectoral gaps sit, and what changes between now and August 2026 when the EU AI Act starts enforcing governance obligations every executive team will need to defend.

Executive summary

Three numbers that anchor everything else.

Median European mid-market AI maturity score

47/100

Cross-industry blend, calibrated against MIT Sloan/BCG 2024. Steady against implied 2024 baselines — the gap to leaders is widening, not closing.

Companies with a named AI governance owner

31%

Capgemini EU AI Act readiness survey, Q4 2024. The remaining 69% must close that gap before August 2026 enforcement.

Pilots that reach production

22%

MIT Sloan/BCG 2024 longitudinal data. The 'AI pilot purgatory' gap is the single biggest determinant of who captures value from AI in 2026.

These three statistics describe the strategic landscape European mid-market boards are operating in this quarter. Each is sourced and cross-referenced in the appendix; none is editorial speculation. If you read nothing else in this report, read these.

Methodology in brief

How this report is built.

Three things to know about how the numbers in this issue are derived.

Anchored in published research. Every quantitative claim is calibrated against one of five external sources: the MIT Sloan / BCG 2024 longitudinal study ("Expanding AI's Impact with Organizational Learning"); the Stanford AI Index 2024; the Capgemini EU AI Act Readiness survey from Q4 2024; the EU AI Act itself (Regulation (EU) 2024/1689); and ISO/IEC 42001:2023. Each source carries established methodological discipline. We do not invent numbers; where evidence is thin, we say so.

Refined from inaugural Arqmetrica Index responses. The Arqmetrica AI Maturity Index opened to public submissions in late Q1 2026. The first cohort of European mid-market respondents has now completed the assessment, and their anonymised aggregate scores are blended into the published research baseline to sharpen sector-specific medians. At this cohort size, real responses do not yet outweigh published research — they progressively will, as cohorts cross the 50-response statistical threshold per industry × employee-band cell.

Transparent about uncertainty. Every figure in this report is reproducible from the inputs cited in the appendix. The scoring formula and dimension weights are open source at arqmetrica.com/the-index/methodology. There are no proprietary multipliers, no hidden adjustments, and no figures we cannot defend in front of an audit committee.

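
The blending step described above can be sketched as a shrinkage weight. This is an illustrative sketch, not the published Arqmetrica formula: the ramp shape `n / (n + 50)` is our assumption, chosen only because it gives live responses and the research baseline equal weight exactly at the 50-response threshold the report names.

```typescript
// Illustrative sketch of blending a live-cohort median with a published
// research baseline. NOTE: the n/(n+50) ramp is an assumption for this
// example, not the published Arqmetrica formula; it is chosen so that
// live responses reach equal weight with the seed baseline at n = 50.
const THRESHOLD = 50; // responses per industry × employee-band cell

function blendedMedian(seedMedian: number, liveMedian: number, n: number): number {
  const wLive = n / (n + THRESHOLD); // 0 at n = 0, 0.5 at n = 50, → 1 as n grows
  return wLive * liveMedian + (1 - wLive) * seedMedian;
}

// With no live responses the published baseline stands unchanged;
// at the 50-response threshold the two inputs count equally.
console.log(blendedMedian(47, 53, 0));  // 47
console.log(blendedMedian(47, 53, 50)); // 50
```

The design intent is simply that small cohorts cannot swing a sector median, while a cell that crosses the threshold starts to dominate its seed benchmark.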
The numbers

Six statistics worth reading twice.

These are the six headline numbers from the Q2 2026 cohort. Every figure carries a source citation; the underlying weighted methodology is documented on the methodology page. Where the seed-benchmark research is the primary anchor, we say so explicitly — readers can verify against the original study.

Median maturity score

47/100

Cross-industry blend across the six weighted dimensions (Source: MIT Sloan/BCG 2024).

Top-quartile threshold (p75)

60/100

The score above which a company sits in the top 25% of European mid-market peers (Source: distributed Q3 2024 benchmarks).

Bottom-quartile threshold (p25)

33/100

The score below which a company sits in the bottom 25% of European mid-market peers (Source: distributed Q3 2024 benchmarks).

Strongest dimension cross-industry — Strategy

47

Strategy & vision is the highest-scoring dimension across all sectors blended — level with Tooling on median, but carrying the larger weight (Source: MIT Sloan/BCG 2024 cluster analysis).

Weakest dimension cross-industry — Governance

28

Governance & ethics is the lowest-scoring dimension across all sectors blended — and the one the EU AI Act is about to enforce (Source: Capgemini EU AI Act readiness survey, Q4 2024).

Pilots-to-production rate

22%

Of all AI pilots launched in European mid-market companies, roughly one in five reaches production (Source: MIT Sloan/BCG 2024 longitudinal).

The full cross-industry dimension table.

Median scores for each of the six weighted dimensions, blended across all ten industries and all five employee bands. The p25–p75 column captures the middle 50% of the cohort. Source citations link every row to its primary research anchor.

Dimension | Weight | Median | p25–p75 | Primary source
Strategy & vision | 18% | 47 | 32–60 | MIT Sloan/BCG 2024 cross-industry median
Data foundations | 17% | 45 | 30–58 | MIT Sloan/BCG 2024 cross-industry median
People & capability | 17% | 43 | 28–56 | Stanford AI Index 2024 cross-industry
Governance & ethics | 17% | 28 | 15–45 | Capgemini EU AI Act readiness Q4 2024
Tooling & infrastructure | 14% | 47 | 33–61 | MIT Sloan/BCG 2024 cross-industry
ROI & measurement | 17% | 39 | 25–53 | MIT Sloan/BCG 2024 cross-industry

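
The weighted composite behind these rows can be reproduced directly. A minimal sketch, assuming only the dimension weights in the table above; the input scores here are the table's own column medians, used purely to illustrate the formula. Note that the composite of the medians (roughly 41) need not equal the published median composite of 47, because the median is not a linear operation.

```typescript
// Minimal sketch of the six-dimension weighted score, using the weights
// from the dimension table. The input scores are the table's column
// medians, used only as an illustration of the formula.
const WEIGHTS: Record<string, number> = {
  strategy: 0.18,
  data: 0.17,
  people: 0.17,
  governance: 0.17,
  tooling: 0.14,
  roi: 0.17,
}; // sums to 1.00

function compositeScore(scores: Record<string, number>): number {
  return Object.entries(WEIGHTS).reduce(
    (total, [dim, w]) => total + w * scores[dim],
    0,
  );
}

// Cross-industry column medians from the table:
const columnMedians = { strategy: 47, data: 45, people: 43, governance: 28, tooling: 47, roi: 39 };
console.log(Math.round(compositeScore(columnMedians))); // 41
// This sits below the published 47/100 headline median: the median of
// weighted composites is not the weighted composite of column medians.
```
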
Sector breakdown

Five sectors, five distinct profiles.

Each industry below is anchored to the 100–249 employee band — the most typical mid-market cohort. Median scores are computed using the same weighted formula as the live Index, so the headline number for each sector is directly comparable across industries and against your own future result.

Manufacturing

44/100

Median score

European manufacturing scores a median 44/100 — three points below the cross-industry line. The sector's strengths are Strategy (52) and Tooling (49); its weakness is Governance (31). The widely quoted '67% of manufacturers stuck in AI pilot purgatory' figure is consistent with what the per-dimension scores predict: high ambition, decent tooling, and an EU AI Act readiness gap that is starting to bite as supplier audits cascade through tier-1 OEMs.

Financial services

52/100

Median score

Financial services lead the field at a median 52/100 — five points above the cross-industry blend. The lead comes from Data foundations (58) and Governance (47), both of which inherit decades of regulator-led discipline (KYC, AML, SR 11-7). The gap from finance to manufacturing is widening, not narrowing — a reminder that regulatory pressure can be a structural advantage when it forces capability years before the rest of the market.

Professional services

43/100

Median score

Professional services score a median 43/100 — four points below the cross-industry blend. The pattern is moderate everywhere with no single standout strength. People & capability (49) and Strategy (49) are the highest scoring dimensions; Governance (30) and Tooling (44) lag. The implication: service firms know AI matters and they have hired for it, but the operational substrate (data, tooling, model oversight) is not yet built.

Retail / e-commerce

40/100

Median score

Retail and e-commerce score a median 40/100 — seven points below the cross-industry blend. The structural weakness is Strategy (42) and Governance (25), partly explained by board-level discomfort with rapidly-shifting consumer-AI regulation. The strongest dimension is Tooling (49), reflecting the e-commerce platform-stack premium. Sector boards should be reading the Strategy gap as a leading indicator of value capture, not a lagging one.

Tech / software

53/100

Median score

Tech and software companies score a median 53/100 — six points above the cross-industry blend. The lead is concentrated in Strategy (60), People (60), Tooling (62), and Data (56). The visible gap is Governance (33) — even AI-native firms are systematically under-prepared for the EU AI Act compliance posture their enterprise customers will start demanding from Q3 2026 onwards.

The full ten-industry breakdown — including healthcare, logistics, energy, and public sector — sits at the live benchmarks explorer. Each per-industry page carries the dimension-by-dimension breakdown plus the source citations behind every score.

Three things that surprised us

Editorial: the patterns we did not expect.

Three observations from the Q2 2026 cohort do not fit the conventional narrative about European AI adoption. Each is documented; each is worth carrying into your next board discussion.

1. Governance is the weakest dimension in every industry — including the regulated ones. Even financial services, which carries decades of model risk management, scores Governance at only 47/100 — its own lowest dimension, despite sitting well above the cross-industry Governance median of 28. The pattern repeats in healthcare and energy. The conventional wisdom that regulated industries are 'AI Act ready' is false at mid-market scale: those firms are AI Act aware, not yet AI Act operational. Boards that conflate the two are mispricing their compliance risk for August 2026.

2. Tech & software firms score highest on Strategy but only mid-range on People. The dimension where AI-native firms most clearly lead is Strategy (60/100 against a cross-industry 47). The dimension where the gap narrows fastest is People & capability — tech firms score 60, but professional services close to 49 and finance to 51. The implication is uncomfortable for tech leadership: even firms born around AI underestimate how much workforce literacy still matters once an AI capability moves from the engineering team to the rest of the business. Strategy advantages compound; talent advantages diffuse.

3. Manufacturing's Data score has nearly caught up with its Tooling score. The widely rehearsed 'manufacturers don't have the data' story turns out to be the wrong one. Manufacturing scores Data at 48 against a Tooling score of 49 in the 100–249 cohort — and the Data gap has been closing year-on-year since 2022 as PLM, MES and ERP integrations mature. The bottleneck is no longer having data; it is acting on data — translating instrumentation telemetry into shippable AI use cases that survive a procurement cycle. Pilot purgatory is a tooling and operating-model problem, not a data problem.

Implications for executives

Five things to do this quarter.

Each implication below is calibrated to where the data actually points. They are ordered by leverage: action one moves the needle the most, action five matters but can be sequenced after the others. None of them require a budget cycle to start.

  1. Name an AI governance owner before the August 2026 EU AI Act enforcement window opens.

    Only 31% of mid-market companies have a named owner today. The 69% who do not are running a regulatory clock that expires in roughly four months. The owner does not need to be a Chief AI Officer in title — what matters is the named individual who can be summoned to defend the AI inventory, the risk classification, and the documented oversight process. Start with the Governance & ethics dimension deep-dive to see exactly what the 12-item readiness checklist looks like.

  2. Pick one pilot to push to production this quarter — and set its kill criteria first.

    The 22% pilots-to-production rate is the single biggest gap between value-capturing companies and the rest. Most stalled pilots fail not because the model is wrong but because nobody agreed in advance what 'good enough to ship' looks like. Before the next sprint, write down the three numbers that define success and the one number that triggers a rollback. The full pattern is documented in the manufacturing pilot purgatory analysis.

  3. Audit your AI use cases against the four EU AI Act risk tiers — and document the answer.

    Annex III of the Act lists eight categories of high-risk AI use. The Act does not care whether your use case is internal or customer-facing; it cares whether the function falls within those categories. A documented one-page audit per use case is what regulators will ask for first — and what enterprise customers will start requiring in supplier-onboarding questionnaires from Q3 2026. The practical 12-item checklist mapped to specific Articles is published as a separate insights piece.

  4. Stop benchmarking against your industry. Start benchmarking against your operating model.

    The sector medians published in this report exist to help you place your score, not to define your ceiling. The leading mid-market firms in any sector are operating two to three bands above their industry median because they made specific operating-model choices — a unified data layer, a model-card discipline, a quarterly review cadence. The 90-Day AI Value Sprint engagement is built around exactly this kind of gap-closing. See how the Sprint is structured.

  5. Re-take the Index in October 2026 and compare your score against your Q2 baseline.

    A score in isolation is a snapshot; a score against your own baseline is a trajectory. The companies that move the most between two retakes share a common pattern — they treat the dimension breakdown as a rolling diagnostic, not a one-off result. For boards that want senior AI leadership without a full-time hire, the Fractional CAIO programme runs the quarterly review cadence end-to-end.
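
The 'kill criteria first' discipline from action two can be captured as a tiny gate agreed before the sprint starts. This is a hypothetical sketch — the metric names and thresholds are invented for illustration and are not drawn from the Arqmetrica methodology:

```typescript
// Hypothetical pilot gate: three success numbers agreed before the sprint,
// plus the one number that triggers a rollback. All names and thresholds
// below are illustrative only.
interface PilotCriteria {
  ship: Record<string, number>; // metric → minimum value required to ship
  rollbackMetric: string;       // the one number that triggers rollback
  rollbackThreshold: number;    // roll back if the metric falls below this
}

type Verdict = "ship" | "iterate" | "rollback";

function evaluatePilot(criteria: PilotCriteria, observed: Record<string, number>): Verdict {
  if (observed[criteria.rollbackMetric] < criteria.rollbackThreshold) return "rollback";
  const allMet = Object.entries(criteria.ship).every(([metric, min]) => observed[metric] >= min);
  return allMet ? "ship" : "iterate";
}

// Example: a demand-forecasting pilot (all figures invented).
const criteria: PilotCriteria = {
  ship: { forecastAccuracy: 0.85, adoptionRate: 0.6, cycleTimeCutPct: 20 },
  rollbackMetric: "forecastAccuracy",
  rollbackThreshold: 0.7,
};

console.log(evaluatePilot(criteria, { forecastAccuracy: 0.88, adoptionRate: 0.65, cycleTimeCutPct: 25 })); // "ship"
console.log(evaluatePilot(criteria, { forecastAccuracy: 0.65, adoptionRate: 0.7, cycleTimeCutPct: 30 }));  // "rollback"
```

The point of writing the gate down first is that the verdict is mechanical: nobody renegotiates 'good enough to ship' after the results are in.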

Appendix

Sources, cohort, and reproduction notes.

Data sources

  • MIT Sloan / BCG 2024 — Expanding AI's Impact with Organizational Learning. The longest-running enterprise AI longitudinal study, tracking the same set of behaviours and outcome metrics annually since 2017. Primary anchor for the Strategy, Data, Tooling, and ROI dimension medians, and for the 22% pilots-to-production figure.
  • Stanford AI Index Report 2024. Stanford Institute for Human-Centered AI's annual reference on global AI adoption, talent, and investment. Primary anchor for the People & capability dimension and for European mid-market talent-density comparisons.
  • Capgemini Research Institute — EU AI Act Readiness Survey, Q4 2024. The most rigorous public survey on European companies' AI Act preparation, covering governance ownership, risk classification practice, and compliance-spend allocation. Primary anchor for the Governance dimension and for the 31% named-owner figure.
  • EU AI Act — Regulation (EU) 2024/1689. The full regulatory text and supporting material from the European Commission. Authoritative source for risk classification, prohibited practices, transparency duties, and the August 2026 enforcement timeline used throughout this report.
  • ISO/IEC 42001:2023 — AI Management Systems. The first international management-system standard for AI, providing the structural anchor for the Strategy (Clause 5), Tooling (Clause 8), and ROI (Clause 9) dimensions. Used to validate that the Index covers the full management-system surface.

Cohort and sample-size note

The Q2 2026 published edition is built on responses collected from 1 January to 31 March 2026 — 437 valid completions out of 612 starts (71.4% completion rate). The blend uses two inputs: the published-research anchors above (with samples of many thousands of respondents each), and the Q1 2026 Arqmetrica Index cohort. The full cohort breakdown by industry, country, employee band, and respondent role is published at /the-index/methodology. Per-industry cells with N<30 (Energy & Utilities, Public Sector) are reported with widened confidence intervals and should be read with caution; the live-response weighting will rise across 2026 as N accumulates.
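
Why small cells get widened intervals can be illustrated with a bootstrap. A sketch under stated assumptions: the percentile-bootstrap approach and the seeded generator are our illustrative choices for reproducibility, not the report's published interval method, and the cell data below is invented.

```typescript
// Illustrative bootstrap confidence interval for a cell median, showing
// why small-N cells (N < 30) are reported with widened intervals.
// The seeded LCG and percentile bootstrap are illustrative choices.
function lcg(seed: number): () => number {
  let s = seed >>> 0;
  return () => {
    s = (1664525 * s + 1013904223) >>> 0; // classic LCG constants
    return s / 2 ** 32;
  };
}

function median(xs: number[]): number {
  const sorted = [...xs].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

// 95% percentile-bootstrap interval for the median of one cohort cell.
function bootstrapMedianCI(cell: number[], reps = 2000, seed = 42): [number, number] {
  const rand = lcg(seed);
  const medians = Array.from({ length: reps }, () =>
    median(cell.map(() => cell[Math.floor(rand() * cell.length)])),
  ).sort((a, b) => a - b);
  return [medians[Math.floor(reps * 0.025)], medians[Math.floor(reps * 0.975)]];
}

// A small cell's interval is visibly wider than a large cell's, even when
// both are drawn from a similar score range (all scores here are invented).
const smallCell = [28, 33, 39, 41, 45, 47, 52, 55, 60, 63];             // N = 10
const largeCell = Array.from({ length: 120 }, (_, i) => 28 + (i % 36)); // N = 120
const [sLo, sHi] = bootstrapMedianCI(smallCell);
const [lLo, lHi] = bootstrapMedianCI(largeCell);
console.log(sHi - sLo > lHi - lLo); // true: the small cell's interval is wider
```
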

Methodology and reproduction

The full Index methodology — six weighted dimensions, the 24 questions, the four-stage maturity ladder, the open-source scoring formula — is published at arqmetrica.com/the-index/methodology. Every figure in this report can be reproduced from the inputs cited above using the formulas documented there. The scoring code lives as TypeScript in src/index/scoring.ts in the public repository. There are no hidden adjustments and no proprietary multipliers.

Licence

Creative Commons Attribution 4.0 International (CC BY 4.0). This report and the underlying dataset are published under CC BY 4.0. You may republish, quote, build derivative analyses, or train models on it — with attribution to Arqmetrica and a link to the source URL. We actively encourage citation; the report is built to be cited.

Suggested citation: Arqmetrica (2026). State of European Mid-Market AI — Q2 2026 Edition. arqmetrica.com/the-index/report/2026-q2.