If an organisation’s AI is a black box, it may soon have a problem. As models and, increasingly, AI agents play a growing role in business and society, their outputs and decisions are having larger consequences – exposing them to greater scrutiny.
Explainability is now a major challenge for organisations as they move from AI experimentation to implementation. Businesses are already facing increased pressure to understand the outputs of models and explain their decisions. This is only the beginning of the story. Explainable AI is becoming a regulatory mandate in parts of the world as governments legislate to help citizens understand the decisions machines make about them.
Even in nations that opt for a lighter touch, consumers and shareholders are showing a growing awareness of “algorithmic fairness” and expect companies to tell them how their AI models work. Globally, at least 69 countries have proposed more than 1,000 AI-related policy initiatives and legal frameworks to address public concerns about AI safety and governance. The EU AI Act has entered into force in Europe, while the UK has proposed its AI Action Plan. Businesses face a challenge in complying with these evolving expectations. To explain their AI models, they must first ensure the processes and protocols they use to prepare their foundational data for AI are in perfect shape, because those processes are about to undergo some serious stress testing.
The drive for better data
AI models are only as good as the data they are trained on. Take, for example, an LLM that is designed to predict consumers’ future behaviour. Useful information to help it perform this task could include the location of a person’s home, indicated by a postcode.
A single error in this field could distort the location data, leading to inaccurate assumptions. In contrast, accurate data that is free from error or bias helps AI make sound inferences about someone’s age, income bracket, life expectancy and more – ultimately driving fairer and more accurate decisions.
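Catching that kind of error is often a matter of simple validation before data ever reaches a model. The sketch below is illustrative only: the column name, sample records and the UK-style postcode pattern are assumptions, not a prescribed standard.

```python
import re
import pandas as pd

# Basic format check for UK-style postcodes (illustrative pattern, not exhaustive).
UK_POSTCODE = re.compile(r"^[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}$", re.IGNORECASE)

def flag_invalid_postcodes(df: pd.DataFrame, column: str = "postcode") -> pd.DataFrame:
    """Add a boolean column marking records whose postcode fails the format check."""
    cleaned = df[column].fillna("").str.strip()
    out = df.copy()
    out["postcode_valid"] = cleaned.str.match(UK_POSTCODE)
    return out

# Hypothetical records: one valid, one malformed, one missing, one unspaced but valid.
records = pd.DataFrame({"postcode": ["SW1A 1AA", "SW1A 1AAA", None, "EC1A1BB"]})
checked = flag_invalid_postcodes(records)
print(checked[~checked["postcode_valid"]])  # rows to quarantine before training
```

Even a check this small keeps obviously broken location data out of training sets and gives reviewers a concrete artefact to point to when explaining how inputs were screened.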
As regulatory pressure grows, getting this information wrong could come with heavy penalties. If someone is refused a loan or quoted a higher-than-expected insurance premium, they will want to know why – and rules about explainable AI will force companies to provide this information.
Failing to build a strong data foundation will therefore lead not only to unhappy customers but potentially to fines or other enforcement action in future. Explainable AI is becoming a necessity, not a trend, meaning that organisations must get their data strategies right today to avoid trouble tomorrow. This accountability is at the heart of the EU AI Act. While the bulk of its compliance requirements won’t take effect until mid-next year, the ban on certain prohibited AI practices and the obligation to promote AI literacy took effect on 2 February 2025. This means that AI providers and deployers must ensure their teams have the AI literacy needed to operate these systems responsibly – or risk increasing scrutiny and potential penalties.
The great data challenge
Data is the fuel of AI models and agents – as well as a major potential obstacle. A recent study of 600 data leaders around the world, including chief data officers, found that 43% of businesses report that data quality, completeness, and readiness are the biggest blockers when rolling out AI. Nearly all (97%) reported issues such as incomplete inputs (53%), licensing issues (50%), and unauthorised use of sensitive data (44%).
Without a solid data management foundation, AI is incomprehensible and unreliable. Businesses need to fully understand the sources of structured and unstructured data feeding their AI models.
AI relies on vast amounts of unstructured and fast-changing data, leading to challenges like data drift, in which formats and content fluctuate over time and become less reliable. Inconsistent data formats from multiple sources complicate AI implementation, and many enterprises struggle with low data sophistication. Additionally, a shortage of professionals who understand both AI and industry-specific nuances hampers development.
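Drift can be monitored with fairly standard statistics. The sketch below is one illustrative approach, not a recommendation from the source: it compares a feature’s current distribution against a training-time baseline using a two-sample Kolmogorov–Smirnov test, with an assumed alert threshold and synthetic data standing in for a real feature.

```python
import numpy as np
from scipy.stats import ks_2samp

# Synthetic stand-ins for a numeric feature (e.g. income) at training time vs. today.
rng = np.random.default_rng(seed=0)
baseline = rng.normal(loc=50_000, scale=12_000, size=5_000)
current = rng.normal(loc=56_000, scale=15_000, size=5_000)

# Two-sample KS test: a small p-value suggests the distributions differ.
statistic, p_value = ks_2samp(baseline, current)
if p_value < 0.01:  # assumed alert threshold
    print(f"Possible drift detected (KS statistic={statistic:.3f}, p={p_value:.4f})")
else:
    print("No significant drift detected")
```

Running such a comparison on a schedule gives teams an early warning that the data feeding a model no longer looks like the data it was trained on.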
Problems with the trustworthiness of data make AI explainability much more difficult. If organisations can’t trust their data, then they can’t trust their AI models – let alone explain their decisions.
To ensure data is up to the job of powering AI models and agents, the following questions need to be answered (a sketch of how some of these checks might be automated follows the list):
- Is the data used to train the model coming from the right systems?
- Have we removed personally identifiable information and followed all data and privacy regulations?
- Are we transparent, and can we prove the lineage of the data the model uses?
- Can we document our data processes and be ready to show that the data has no bias?
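The sketch below shows, in simplified form, how two of these questions might start to be answered in code: dropping obviously identifying columns and flagging severe imbalance in a protected attribute. The column names, PII list and threshold are assumptions for illustration, not a compliance standard.

```python
import pandas as pd

# Columns assumed to directly identify a person (illustrative list only).
PII_COLUMNS = {"name", "email", "phone", "national_insurance_number"}

def drop_pii(df: pd.DataFrame) -> pd.DataFrame:
    """Remove columns that directly identify a person."""
    return df.drop(columns=[c for c in df.columns if c in PII_COLUMNS])

def representation_report(df: pd.DataFrame, attribute: str, min_share: float = 0.10) -> pd.Series:
    """Report each group's share of the data and warn about under-represented groups."""
    shares = df[attribute].value_counts(normalize=True)
    underrepresented = shares[shares < min_share]
    if not underrepresented.empty:
        print(f"Warning: under-represented groups in '{attribute}':\n{underrepresented}")
    return shares

# Hypothetical applicant data.
applicants = pd.DataFrame({
    "name": ["A", "B", "C", "D"],
    "age_band": ["18-25", "26-40", "26-40", "41-65"],
    "income": [21_000, 48_000, 52_000, 39_000],
})
clean = drop_pii(applicants)
representation_report(clean, "age_band")
```

Checks like these do not prove a dataset is unbiased, but they produce documented, repeatable evidence that the questions above were asked and acted on.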
How to make AI explainable
The key to overcoming explainability hurdles lies in strengthening data foundations, leveraging cloud-based tools, and investing in governance, training, and high-quality datasets to maximise AI’s potential at scale.
Organisations must follow strict data management principles to ensure all information is holistic, accurate, up-to-date, accessible and protected from external threats.
Data literacy is also critical. Employees should know how to apply data management best practices and understand the importance of explainability, as well as how to achieve it. When businesses have confidence in their data, they are one step closer to explaining the insights and outputs of their AI models.
Effective and explainable AI deployment requires an intelligent approach to data management so that organisations can access real-time, trusted, relevant data – no matter where it resides. Metadata-driven lineage and traceability are essential for maintaining reliability by ensuring transparency and accountability in data processing.
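One way to picture metadata-driven lineage is to record a small metadata entry for every transformation a dataset goes through. The following is a minimal sketch under assumed names and data; real platforms capture far richer metadata, but the principle is the same.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable
import pandas as pd

@dataclass
class LineageStep:
    """Metadata describing one transformation applied to a dataset."""
    name: str
    applied_at: str
    rows_in: int
    rows_out: int

def apply_with_lineage(df: pd.DataFrame, step_name: str,
                       transform: Callable[[pd.DataFrame], pd.DataFrame],
                       lineage: list) -> pd.DataFrame:
    """Apply a transformation and append a lineage entry describing it."""
    result = transform(df)
    lineage.append(LineageStep(
        name=step_name,
        applied_at=datetime.now(timezone.utc).isoformat(),
        rows_in=len(df),
        rows_out=len(result),
    ))
    return result

# Hypothetical raw extract with gaps in both columns.
lineage: list[LineageStep] = []
raw = pd.DataFrame({"postcode": ["SW1A 1AA", None, "EC1A 1BB"], "income": [21_000, 48_000, None]})
step1 = apply_with_lineage(raw, "drop_missing_postcodes",
                           lambda d: d.dropna(subset=["postcode"]), lineage)
step2 = apply_with_lineage(step1, "fill_missing_income",
                           lambda d: d.fillna({"income": d["income"].median()}), lineage)
for step in lineage:
    print(step)
```

With a trail like this, anyone asking why a model saw a particular input can follow the data back through every step that shaped it.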
Ultimately, AI models can only succeed when they are built on trusted and well-managed data. Building that foundation can no longer be put off. AI will need to be explainable, which means organisations must start understanding and properly preparing their data as soon as possible.