Data foundations
The quality, governance, and accessibility of organisational data for AI use cases.
What we ask
The four questions in this dimension, each with four ordinal options and their fixed scores. The wording is identical to what you will see in the assessment.
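For readers who want the arithmetic, below is a minimal sketch of how the fixed scores could combine into a dimension score. It assumes the dimension score is the plain mean of the four question scores; the assessment's actual weighting is not stated here.

```python
# Minimal scoring sketch. ASSUMPTION: the dimension score is the plain
# mean of the four question scores; the real weighting may differ.
OPTION_SCORES = {"a": 100, "b": 67, "c": 33, "d": 0}

def dimension_score(answers: list[str]) -> float:
    """Average the fixed scores for the chosen option letters."""
    return sum(OPTION_SCORES[a] for a in answers) / len(answers)

# Example: answering b, c, b, d gives (67 + 33 + 67 + 0) / 4 = 41.75
print(dimension_score(["b", "c", "b", "d"]))  # 41.75
```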
1. When you need to use customer data for an AI model, how easy is it to access cleanly?
Think about the typical journey from "we want to try this" to "the model has the right data". Be honest, not flattering.
| Option | Response | Typical turnaround | Score |
|---|---|---|---|
| a | Already in a clean, governed warehouse with documented schemas | Hours to serve a request | 100 |
| b | Lives across 3–5 systems; needs joining each time | Days to a couple of weeks per request | 67 |
| c | Quality is inconsistent; cleaned case-by-case | Weeks to a month per request | 33 |
| d | We don't have reliable customer data for AI yet | N/A | 0 |
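To make option (a)'s "clean, governed warehouse with documented schemas" concrete, here is a minimal sketch of one schema-registry entry. The structure and every field in it (owner, refresh cadence, column notes) are illustrative assumptions, not a prescribed format.

```python
# Illustrative schema-registry entry for one governed table.
# All names and fields here are assumptions, not a standard.
customer_orders_schema = {
    "table": "warehouse.customer_orders",
    "owner": "data-platform@example.com",
    "refresh_cadence": "hourly",
    "columns": {
        "customer_id": "Stable internal ID; joins to warehouse.customers",
        "order_total": "Order value in EUR, including tax",
        "created_at": "UTC timestamp when the order was placed",
    },
}
```

The point is less the format than that "what does this column mean, who owns it, how fresh is it?" takes a lookup, not an archaeology project.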
2. Is data lineage documented and queryable for the data feeding your AI systems?
Lineage = "for any field used by an AI model, can you trace it back to the source system, owner, and last refresh time?"
| Option | Response | Score |
|---|---|---|
| a | Yes — automated lineage tooling, queryable in seconds | 100 |
| b | Documented for critical pipelines, manual elsewhere | 67 |
| c | Tribal knowledge — engineers know, but it is not written down | 33 |
| d | No lineage tracking | 0 |
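The lineage definition above is directly testable in code. Here is a minimal sketch of a queryable lineage record: one lookup answers "source system, owner, last refresh" for a field an AI model uses. The data structure and sample values are hand-rolled illustrations; real deployments use dedicated lineage tooling rather than a dict.

```python
from datetime import datetime, timezone

# Minimal sketch of a queryable lineage record, matching the definition
# above. Illustrative only; real setups use dedicated lineage tooling.
LINEAGE = {
    "features.customer_tenure": {
        "source_system": "crm.accounts",
        "owner": "crm-team@example.com",
        "last_refresh": datetime(2024, 1, 15, 6, 0, tzinfo=timezone.utc),
    },
}

def trace(field: str) -> dict:
    """Answer 'where did this field come from?' in one lookup."""
    record = LINEAGE.get(field)
    if record is None:
        raise KeyError(f"No lineage recorded for {field!r}")
    return record

print(trace("features.customer_tenure")["source_system"])  # crm.accounts
```

Option (a) is essentially this, automated and kept current by tooling; option (c) is the same information living only in someone's head.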
3. How is personal data flagged and handled when used in AI systems?
Personal data carries obligations under the GDPR, and AI systems that process it face additional requirements under the EU AI Act. This question is about whether your systems know which fields are sensitive.
| Option | Response | Score |
|---|---|---|
| a | Field-level tagging + automated DPIA workflow when AI touches PII | 100 |
| b | Manual DPIA process triggered by request | 67 |
| c | Reviewed only when legal escalates a concern | 33 |
| d | No formal handling of PII in AI workflows | 0 |
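As one illustration of option (a), here is a minimal sketch of field-level tagging with an automated DPIA hook: fields carry tags, and any AI job that reads a tagged field opens a review before it runs. The tag vocabulary and the `trigger_dpia` hook are hypothetical names for illustration.

```python
# Minimal sketch of field-level PII tagging with an automated DPIA hook.
# The tag names and trigger_dpia function are illustrative assumptions.
FIELD_TAGS = {
    "customers.email": {"pii"},
    "customers.date_of_birth": {"pii", "special_category"},
    "orders.order_total": set(),
}

def trigger_dpia(job_name: str, fields: list[str]) -> None:
    # Stand-in for a real workflow integration (ticket, review queue, ...).
    print(f"DPIA review opened for {job_name}: {', '.join(fields)}")

def check_fields_for_ai_job(fields: list[str], job_name: str) -> None:
    """Flag any tagged PII field an AI job wants to read, before it runs."""
    flagged = [f for f in fields if "pii" in FIELD_TAGS.get(f, set())]
    if flagged:
        trigger_dpia(job_name, flagged)

check_fields_for_ai_job(["customers.email", "orders.order_total"], "churn-model-v2")
# DPIA review opened for churn-model-v2: customers.email
```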
4. How representative is your training/retrieval data of the populations your AI systems will serve?
Bias often comes from skewed source data. Have you formally measured representation gaps?
| Option | Response | Score |
|---|---|---|
| a | Documented bias audit per major use case, refreshed quarterly | 100 |
| b | Spot-checks before deployment; no formal cadence | 67 |
| c | Aware of the risk but no formal process | 33 |
| d | Have not considered this | 0 |
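A formal bias audit is a bigger exercise, but the first measurement question 4 implies is simple: compare each group's share in the training data with its share in the population served. A minimal sketch is below; the population shares and age bands are made-up illustrative numbers, and a real audit would use measured reference data and proper statistical tests.

```python
from collections import Counter

def group_shares(labels: list[str]) -> dict[str, float]:
    """Share of each group in a list of group labels."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def representation_gaps(train_labels: list[str],
                        population_shares: dict[str, float]) -> dict[str, float]:
    """Per-group gap: training share minus population share."""
    train = group_shares(train_labels)
    return {g: train.get(g, 0.0) - share for g, share in population_shares.items()}

# Illustrative numbers only: an assumed served population vs. a skewed sample.
population = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}
train = ["18-34"] * 50 + ["35-54"] * 45 + ["55+"] * 5
print(representation_gaps(train, population))
# approximately {'18-34': 0.20, '35-54': 0.05, '55+': -0.25}
```

A positive gap means a group is over-represented in the training data relative to who the system serves; a large negative gap is where skew-driven bias is most likely to show up.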
Ready to see your full score?
Take the full assessment to see your score across all six dimensions, your peer benchmark, and your three highest-leverage moves.
Take the assessment →