President Donald Trump’s predictable move to revoke Biden’s Executive Order on AI is bad news for the world, particularly for those who wish to encourage responsible AI practices globally and build the public trust needed for rapid AI adoption.
On January 20, 2025, President Donald Trump rescinded Executive Order 14110, which had been signed by former President Joe Biden in October 2023. This order had established comprehensive safety guidelines for the development and deployment of artificial intelligence (AI) technologies. It mandated that developers of AI systems posing risks to national security, the economy, public health, or safety share their safety test results with the U.S. government before public release. Additionally, it directed the National Institute of Standards and Technology to create safety testing standards and tasked federal agencies with assessing potential risks posed by AI.
President Trump’s repeal of this executive order reflects a shift in policy towards prioritizing rapid AI innovation over regulatory oversight. He emphasized the development of new AI tools and signed multiple new executive orders to promote this agenda. This move has been met with mixed reactions. Proponents argue that reducing regulatory barriers can accelerate technological advancements and maintain the United States’ competitive edge in AI. However, critics express concern that removing these safeguards may lead to unchecked AI development, potentially compromising public safety, ethical standards, and global trust in AI technologies.
The revocation of Biden’s executive order has also had economic implications. Following the repeal, companies like Nvidia, a leading AI chipmaker, experienced a rise in stock value, indicating market optimism that a more relaxed regulatory environment will foster innovation. Nonetheless, this deregulation raises concerns about the potential for mass automation and its impact on employment, as well as the ethical use of AI.

In summary, while the intention behind rescinding the executive order may be to promote rapid AI development, it is crucial to balance innovation with responsible practices. Establishing and maintaining public trust in AI requires careful consideration of safety, ethical standards, and the potential societal impacts of these technologies.
The UK can leverage this historic moment to craft its own responsible AI regulation, charting a middle ground between the US (now with minimal federal AI regulation) and the EU (with its stringent EU AI Act), and thereby position itself as a global leader in responsible AI. By implementing a fair, balanced, innovation-friendly and coherent regulatory framework, the UK Government can successfully deliver its recently launched AI Opportunities Action Plan, reaping long-term economic prosperity for the British public whilst protecting the greater public good.