‘Divided by a common language’ – tackling AI ethics and governance in the UK and US

The initial fear has subsided somewhat about a robotic takeover and has been replaced by discussion about the ethics that will accompany the integration of AI into everyday business structures.

Pillars and principles

Emerging regulatory frameworks around AI ethics and governance in the EU, the US and the UK are being handled, as anyone could have predicted, wildly differently. Governments around the world are struggling to combine safeguarding and regulation with the development and innovation that will benefit their economies. The United Nations drafted ten principles in 2022 in response to the ethical challenges raised by the increasing prevalence of AI, and these have informed the approach of responsible governments. Building AI integration around these central pillars ensures an ethical and solid foundation.

AI in the EU  

In Europe, the implementation of AI is governed by the European Union's AI Act, first proposed in April 2021 to establish a unified legal framework for AI across the EU. The Act categorises AI systems into different risk levels, with the emphasis on public safety, ethical considerations, and data privacy. By contrast, the UK's National AI Strategy emphasises regulatory 'flexibility', with a more outcome-focused and less technology-specific remit. The idea here is that it allows the UK to adapt as AI technologies evolve, rather than locking itself into rigid structures.

The UK government is also encouraging industries to establish their own standards and practices in a move towards greater self-regulation. Presumably the hope is that this will provide sufficient freedom for technological growth without leaving the government open to criticisms of negligence when it comes to safety and data privacy.

Sadly, though, the EU Act itself contains vague elements that are open to interpretation, particularly in the area of 'unacceptable risk'. There is a real danger that the main beneficiaries of the Act in its current form will be corporate lawyers rather than the public, who need to be protected within the EU's values of dignity, freedom, and non-discrimination.

AI in the US

Over the water, Donald Trump is continuing the laissez-faire attitude toward AI regulation that he began in his previous presidency, prioritising economic growth and innovation over regulation and pressuring other developed nations to follow suit if they wish to remain competitive. His aim, as set out in his American AI Initiative, is to ensure the US remains a global leader in AI by investing in research, fostering collaboration between government and the private sector, and promoting AI education and workforce development. The fear among regulatory bodies is that he is being overly influenced by executives from large tech companies, and that a hurried removal of what they, and therefore the President, dismiss as 'red tape' will leave AI untethered from safety and privacy regulations.

UK steps forward

The UK government offers a more positive picture in terms of active engagement in forward-thinking plans around AI without sacrificing safety or privacy. One of the pillars of AI development will be regulatory leadership: the crafting of agile AI regulations that encourage innovation while protecting societal values. The government's national renewal plans, announced in January 2025, are designed to generate economic growth, improve productivity and enhance the quality of life for citizens. The key here is that the government is prepared to embrace AI across the public, private, healthcare and education sectors; no area will be immune to AI, and we can all benefit if we remain open-minded and ambitious in our plans and execution.

Plan for Change in the UK

The government’s Plan for Change, which aims to take a start-up approach to funding new AI technology, is impressive in its intent, pace and positioning. Public-private collaborations need to strengthen partnerships between government, AI start-ups and tech giants to ensure AI applications serve both economic and social needs. At the same time, however, AI adoption should not be rushed at the expense of responsibility and ethics. After all, the UK positioned itself as a global leader in AI safety by hosting the first AI Safety Summit in 2023 and launching its AI Safety Institute to research risks and create governance frameworks, and we must continue that approach.

Navigating AI ethics

Navigating the risks of AI will be difficult, tedious and long-winded for governments, and weighing up the risks of dense legislation such as the EU AI Act will only exacerbate this. While the intent behind the safeguarding is good, the application is poor, much as with GDPR, and broader thinking is required.

Yes, the US may boast a more fleet-footed approach that could bring early gains, but at what cost? By removing safeguards protecting privacy, security and finances, such as artist copyright, any societal and economic gains may quickly be lost. Strong leadership and ethical vision have never been so important, and that is definitely something that requires human qualities.
