
At the end of Trump’s first 100 days in office, polling by Prolific revealed scepticism among respondents about who stands to benefit from sustained investment in AI technology: 69.7% believe AI investments will primarily benefit large corporations rather than everyday Americans, and 50.9% are not confident in Trump’s ability to guide AI policy over the next four years. The picture is similar in the UK, where a March survey saw over 75% of respondents say the government or regulators should oversee AI safety, rather than private companies alone.
The pressing question facing governments and businesses is how to stay ahead of the pack as competition heats up: keeping the interests of the public at heart while creating regulatory environments in which domestic AI movers and shakers can thrive.
Public distrust is deeply rooted, despite a flurry of executive orders and policy moves marketed by the administration under the banner of ‘Artificial Intelligence for the American People’. That distrust stems from the perception that policy primarily serves corporate interests rather than individuals.
Compared with Biden’s now-rescinded executive order, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”, Trump’s orders prioritise deregulation and treat the proliferation of AI as a route to increasing America’s competitiveness on the global stage. This was borne out last month when Trump rescinded the AI diffusion rule, allowing the export of advanced tech to new friends in the Middle East.
This bold prioritisation of corporate interests is nothing new, but what underlies moves like these is a troubling disregard for ethics in AI. The administration has shifted focus away from real concerns around bias, discrimination, and the impact of AI on the job market – possibly due to a lack of credible expertise on its AI regulatory panel. The CEO of Anthropic, however, has warned the US government and companies that AI could wipe out half of all entry-level white-collar jobs, urging them to stop ‘sugar-coating’ the situation.
This debate shouldn’t be viewed as partisan – the breadth and ambition of Biden’s executive order were met with some praise in 2023, but commentators like Daniel Ho, a professor of law and political science who served on the National Artificial Intelligence Advisory Commission, warned of a shortfall in the expertise needed to meet the order’s stipulations: “The one caution here is that if you don’t have the human capital and, particularly, forms of technical expertise, it may be difficult to get these kinds of requirements implemented consistently and expeditiously” (Scientific American).
Who should lead AI governance to win public trust?
Those at the frontline of model development, actively building the AI of the future and witnessing its whims and vicissitudes first-hand, should of course be central to AI governance. This is for the simple reason that, together, they have the greatest ability, resources, and contextual awareness needed to develop and implement appropriate safeguards.
Private companies have top-tier knowledge, but in an industry moving at breakneck pace where the prize is a dominant role in the AI future, they don’t always have the incentive to self-regulate at the expense of agility and competitive advantage. On the other hand, those on the academic side of AI development and regulation have both the domain expertise and freedom from corporate motivations to devote the necessary time to safety and ethics in the public interest.
Such guardrails and boundaries are at risk if too much power is given to the private sector. Model builders will prioritise market cap and shareholder value over considerations like representativeness, which are crucial to the sustainable and ethical development of artificial intelligence in the long term.
A hybrid model of governance
This is precisely why bodies such as the AI Security Institute (AISI) and its international counterparts – which conduct research to evaluate AI safety risks using experts from academia and industry – should have an outsized influence. The UK Labour government’s proposed legislation requiring companies to submit their LLMs to the AISI for testing represents exactly this kind of appropriate oversight, with governments directing AI policy not in isolation, but on the advice of independent and credible research bodies.
The UK government has signalled that it will establish the AISI as a statutory body – a solid step forward in ensuring the right interests guide our approach to AI, and a framework that should be replicated internationally.
However, in a prime example of how regulatory coordination challenges can create delays, the bill was postponed this week to align with moves in the US and ensure the UK remains competitive, pushing completion back to 2026. Another risk is that if independent bodies working in conjunction with governments become overly prescriptive, progress will slow in an industry where speed is of the essence – and where winners will quickly outstrip laggards on the global stage.
Striking a balance between caution and competitiveness
There are measured ways to go about this. As part of his grand deregulatory frenzy, Trump has removed most of the guardrails for AI, leaving a largely free market. But these moves are not grounded in research showing them to be the best course of action; they take their lead from companies rather than from researchers in the field. A government acting without that expertise should have no role in AI regulation and governance.
How other nations should respond to developments in the US is a question with no single answer. Should we deregulate to keep pace, sacrificing control to private companies? Or maintain a steady course to ensure a safer and more sustainable AI future, at the risk of companies leaving for the comparatively loose ecosystem in the US?
This fine line between regulation and competitiveness is a critical question for the future of AI, and the stakes are high. It boils down to a winner-takes-all game: the first country to develop next-generation Artificial General Intelligence (AGI), for example, will find itself with unparalleled influence overnight.
Temperature checks to build trust
One thing is clear: we can’t lose sight of people and what they stand to lose and gain from AI. Understandably, we can expect public backlash and continued distrust, particularly around the potential for job losses through advances in AI, and we must account for the natural human inclination to overlook AI’s potential benefits.
In addition to enacting thoughtful regulation, we need regular temperature checks on sentiment – at least monthly – which could take a number of forms, including targeted polling of public opinion around AI governance. We shouldn’t rule out more ambitious approaches like citizens’ assemblies. A good example came in late 2024, when a citizens’ jury convened by IPPOSI (Irish Platform for Patient Organisations, Science and Industry) brought together 24 jurors to gauge public sentiment and gather recommendations on the ethical use of AI in healthcare.
In the end, the IPPOSI panel called for strong regulation, transparent oversight and robust data security, alongside a separate watchdog. These findings, mirrored in Prolific’s own polling, might point the way to maintaining public trust in AI governance writ large: listening to those at the frontline and regulating AI for people, not just business. As international competition heats up, though, remaining agile will be make or break.