Artificial intelligence is evolving at a rapid pace, and governments around the world are taking markedly different approaches to how the technology should be governed.
The European Union, USA, and China are forming their strategies based on their political systems and the regulatory environments that are already in place.
However, regulating and governing AI is clearly a huge challenge, and one that must be embraced: governments that do not risk falling behind the nations that do.
As a global supply chain procurement company to governments and large corporates, OCI monitors jurisdiction-specific AI regulations and recognises AI to be something that could have a massive impact on our sector.
EUROPEAN UNION
The EU has focused its stance on regulation, ethical considerations, and human rights. It is currently working on the Artificial Intelligence Act (AI Act), one of the first major attempts anywhere in the world to create a comprehensive regulatory framework for AI. The Act sorts AI systems into four so-called ‘risk’ categories – all of which need consideration.
First of all, the EU is looking at what constitutes an ‘unacceptable risk’ when using AI technology, such as where it could be used for mass surveillance of a population. Here the EU, naturally, wants such AI uses to be banned.
Then there is ‘high risk’ which includes things like biometric identification, critical infrastructure, and healthcare – all of which the EU says must be subject to strict requirements.
Thirdly, the EU has looked at ‘limited risk’, covering AI systems such as chatbots, which must meet transparency requirements. Lastly, there is ‘minimal risk’ – video games, for example – which faces little, if any, regulation.
The EU has also published Ethical Guidelines for Trustworthy AI, which cover fairness, accountability, transparency, and privacy protection. This reflects the EU’s aim for AI to be human-centred, with humans kept in charge. GDPR also applies to AI systems, particularly where they are used to process individuals’ personal data.
The EU has already set aside substantial funding for AI research to address the social dimension of these challenges, so it will be interesting to see what else it decides to do around governance over the coming year. Overall, the AI Act aims to ensure AI systems are ‘developed and deployed in a way that respects European values, including privacy and individual rights’.
UNITED STATES
The USA, in my opinion, has taken a more relaxed approach to AI and is more focused on the dollar, employing the technology to drive technological and business growth while balancing potential risks.
There is no comprehensive federal AI law and, so far, the US has relied on the FDA for guidance in healthcare and the FTC for consumer protection rules. The OECD has also released ‘principles on AI’, which focus on human rights, fairness, transparency, and accountability.
Back in 2020, the US passed the National AI Initiative Act, which put together a framework for coordinating AI research, development, and policy across federal agencies.
The aim was to secure AI leadership, build a skilled AI workforce, and promote international cooperation. Unlike the EU, though, the US does not want regulation to stifle the technological advancement of AI.
The only real area of concern we can see from the US is where AI intersects with the military – how it could be used in defence technology, autonomous weapons systems, and cybersecurity, the latter of which is becoming an increasing problem.
CHINA
China’s approach to AI governance is influenced by its political system and quest to become a global leader in the field by 2030. In 2017, the country released its ‘next generation’ AI plan which emphasises building AI as a ‘national priority’. It has made significant investments in research, infrastructure, and industry applications.
China is already seen as a global leader in the use of AI for surveillance. It uses AI-driven systems for monitoring social behaviour, law enforcement, and maintaining public order.
It makes widespread use of big data analytics and facial recognition technology, raising concerns in the West about privacy and civil liberties. These practices, however, align with China’s political and social system, which is determined to maintain social stability. The country has issued ethical guidelines for AI, but how much weight they carry is up for debate.
China is keen to integrate AI in industrial processes and has launched programmes in healthcare, transportation, and manufacturing. It is also providing incentives to businesses and academic institutions to bolster AI research and development.
In summary, the key differences between the EU, USA, and China are complex. The EU favours quite a heavy regulatory approach and seems more cautious about the advancement of AI. The USA prefers fewer rules and a focus on innovation, using the technology to build business. Meanwhile, China advocates state control, aligning AI with national interests – especially when it comes to surveillance and industry.