Future of AI

“An AI maker, not taker”: Beyond the rhetoric 

By Rich Cownley, Partner at Yonder Consulting

Keir Starmer’s recent pledge that the UK will become “an AI maker, not taker” marks an ambitious step forward in the country’s approach to artificial intelligence. The statement signals a clear desire for Britain to assert leadership in the global AI race, rather than passively adopting innovations developed elsewhere. But while strong rhetoric from political leaders is an important starting point, turning those words into meaningful outcomes will require swift, coordinated and strategic action.

Public attitudes towards AI are cautiously optimistic, but confidence is growing only slowly. While the benefits of AI are widely acknowledged, there’s a notable gap between the speed of technological development and the public’s trust in its use.

Setting the pace of play

Currently, the global AI agenda is dominated by large tech companies, typically based in the United States or China. These firms are setting the pace of innovation, leaving governments around the world – including the UK – rushing to catch up in areas like regulation, ethical guidance and public engagement. Despite some strong research institutions and promising startups, the UK is not yet viewed as a global leader in AI development. Changing that narrative is essential if the UK is to remain competitive in the global AI race.

The rate of change is set to increase exponentially too. Only a few years ago, AI was a fringe technological pursuit; now it’s a major part of every modern business’s CX and logistics strategy. And with OpenAI recently announcing an 80% decrease in operating costs for its most powerful model, barriers to entry are tumbling. Soon every business, small and large, will be able to adopt an AI solution at minimal additional expenditure to better serve its customers.

A major part of the effort to bring the public onside lies with the government. Our recent Yonder Omnibus polling reveals that 83% of people believe it is crucial to establish clear ethical guidelines and regulations for AI development and deployment. Meanwhile, 79% agree that governments – not tech companies alone – should be responsible for setting the rules and limiting the risks associated with AI. This shows a clear public consensus: democratic institutions must take the lead in shaping how AI is used and integrated across society.

Public opinion is the limiting factor

Among the public’s most pressing concerns is the misuse of AI: 79% of people agree that it’s becoming more difficult to identify fake news and misinformation, and many are concerned about AI systems’ potential misuse of personal data. From deepfakes to algorithmically amplified conspiracy theories, AI is already contributing to the erosion of trust in online information and data safety. These challenges are not hypothetical; they are present and growing. Tackling them head-on is not just about risk mitigation – it’s about ensuring that AI can be deployed in ways that genuinely serve the public interest.

Bringing the people on board

To prevent a future where innovation races ahead while public trust lags far behind, the UK must pursue a coordinated and transparent approach, with the government spearheading its rollout. That means embedding ethical principles – like transparency, clarity in training data, and consistent human oversight – into both regulation and product design. But it also requires proactive investment in public-facing AI services across the NHS, HMRC and local councils, ensuring citizens see real, tangible benefits from the technology.

The rhetoric from Starmer is bold. Whether it becomes more than a soundbite will depend on whether government, business, and civil society can work together to balance innovation with accountability. The path is there, but walking it will demand more than words; it needs leadership and action.
