
Many column inches have been dedicated to speculation over the existence of an AI bubble in recent months. Pundits have pondered the potential causes and impact of a bubble bursting, and markets have dipped and spiked as confidence ebbed and flowed.
Despite all this speculation, however, one area has been notably absent from the commentary: that of trust in AI itself.
Artificial intelligence has never been more embedded in the fabric of everyday life. Both at work and at home, AI systems increasingly shape people’s decisions. And, with growing numbers of people treating tools like ChatGPT and Copilot as their search engine of choice in 2025, there is an important question over the quality and safety of the data being used to train AI and serve up answers.
As adoption rises, an uncomfortable truth is becoming harder to ignore: trust in AI is not keeping pace.
Brits demand trustworthy data
New research, commissioned by Milestone Systems and undertaken by UK research firm Obsurvant, recently revealed that nearly nine in ten Brits (88%) who regularly use AI at work or in day-to-day life are concerned that the data on which AI is trained could be unlicensed, inaccurate or in breach of individuals’ privacy.
A striking 96% of respondents say that AI tools must respect individuals’ privacy. Meanwhile, 92% believe their AI systems should avoid infringing copyright. Yet despite these strong views, only one in four users knows where the data behind their AI tools actually comes from.
This gap between expectation and transparency presents both a challenge and an opportunity.
This is a surprising and urgent backlash among the people who rely on AI the most. These respondents are not occasional users; they represent the vanguard of AI-driven productivity, and they are signalling a crisis of confidence.
The implications are far-reaching
More than half (51%) of these frequent users say they intend to reduce their use of AI tools in the future, and 15% say they plan to stop using them entirely. Given that broad-scale adoption is still in its early phases, these numbers could mark the beginning of a troubling trend: disillusionment with AI before society has even unlocked its full potential.
The public knows what it wants from AI – and it’s not compromise. The survey underscores the remarkably high standards British users are placing on AI systems. Across the board, respondents expect technology to be:
- Accurate (97%)
- Unbiased (94%)
- Free from copyright infringement, illegal images, abusive language, or hate speech (93%)
Real-world impact of poor data
Using poor-quality data to train AI models means that results can never be wholly trustworthy. This has real-world impact for the very audiences these AI tools purport to help.
AI is only as good as the data it is trained on, so data quality directly determines the value of the output.
Consider AI automation. If a system produces too many false positives, human oversight is still required, and users never see the efficiency gains that automation promises, as the short sketch after these examples illustrates.
Poorly trained AI can trigger false alarms about traffic accidents, incorrectly report patient falls in hospitals, or wrongly flag abandoned suitcases in airports.
These examples underscore a larger point: the sophistication of a model cannot compensate for flawed or improperly sourced data.
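To make the false-positive point concrete, here is a minimal, purely illustrative sketch in Python. The alert counts and the precision threshold are hypothetical assumptions for the example, not figures from the research; the idea is simply that when too many alerts are wrong, every alert still needs a human check and the efficiency benefit of automation disappears.

```python
# Illustrative only: hypothetical alert counts and an arbitrary precision threshold.
# If too many alerts are false positives, every alert still needs a human check,
# and the efficiency gains of automation are lost.

def requires_manual_review(true_alerts: int, false_alerts: int,
                           min_precision: float = 0.95) -> bool:
    """Return True if too many alerts are false positives to trust the system unattended."""
    total = true_alerts + false_alerts
    if total == 0:
        return True  # no evidence yet, so keep a human in the loop
    precision = true_alerts / total
    return precision < min_precision

# Example with made-up figures: 180 genuine incidents vs. 70 false alarms
print(requires_manual_review(true_alerts=180, false_alerts=70))  # True: keep manual checks
```

The exact threshold would vary by use case, but the logic is the same: the value of automation collapses once operators can no longer act on an alert without verifying it themselves.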
A roadmap to improving trust
Far from being passive recipients of algorithmic decisions, today’s users are informed and discerning. They know that AI’s performance and integrity depend fundamentally on the data it is trained on, and they are increasingly sceptical that developers are doing enough to ensure that future data is lawful, high-quality, and responsibly sourced.
People rightly expect technology to be safe, fair and responsible. Those expectations should be met, not just with words, but with actions as well. Thought leadership in AI has often centred on model architecture, scaling laws, or computational power. But users’ priorities point elsewhere: the real battleground for trust is data.
Without clear visibility into data provenance – and without consistent assurances that training datasets meet regulatory, responsibility and quality benchmarks – users feel understandably uneasy. They suspect that much of today’s AI has been trained on broad internet scrapes, where data quality is uneven, licensing is ambiguous, and private information may be inadvertently ingested.
And, while the appetite for clarity and reassurance suggests an emerging value proposition for AI developers and enterprises, delivering it in practice means rewriting established ways of working, and that is not an easy path to tread.
There’s a growing need among businesses for compliant, high-quality data to better train AI, and it’s interesting to see this trend carry over to the general public.
In other words, users are not rejecting AI – but they are rejecting the idea that AI should be trained on anything less than trustworthy data.
Among AI developers there is also a growing awareness of the so-called ‘data lineage’ of the massive amounts of data on which AI models are trained. Data lineage is the process of tracking the flow of data from its original source to its final destination, ensuring transparency and improving data quality.
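As a rough illustration of what tracking data lineage can look like in practice, the sketch below attaches a provenance record to a training dataset. The field names, sources and licence labels are assumptions made for the example, not an established standard.

```python
# A minimal, hypothetical sketch of a data-lineage record for a training dataset.
# Field names and values are illustrative assumptions, not an industry standard.

from dataclasses import dataclass, field
from typing import List

@dataclass
class LineageRecord:
    source: str              # where the data originally came from
    licence: str             # licence under which it may be used for training
    collected_on: str        # when it was gathered (ISO date)
    transformations: List[str] = field(default_factory=list)  # processing steps applied

    def add_step(self, step: str) -> None:
        """Append a processing step so the path from source to training set stays traceable."""
        self.transformations.append(step)

# Example: a record that travels with the dataset from source to final training set
record = LineageRecord(
    source="partner-supplied CCTV footage (consented)",  # hypothetical source
    licence="commercial licence, training permitted",
    collected_on="2025-03-01",
)
record.add_step("faces blurred for privacy")
record.add_step("duplicates and corrupt frames removed")
print(record)
```

Keeping this kind of record alongside the data itself is one way to give users and regulators the visibility into provenance that the survey respondents are asking for.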
It’s my experience that we can put trust and quality at the heart of AI training. This is the direction of travel for AI in 2026 and beyond – as we all focus on building better solutions, in every sense of the word, responsibility must be at the heart of our approach.