AI regulation: Why it needs to come sooner rather than later

AI development is happening at breakneck speed, underpinned by massive investments from Big Tech companies such as Microsoft, Google and Amazon. The consequence of this escalating AI arms race is that commercial applications are reaching consumers without sufficient information on the data upon which these AI models have been trained. Witness the speed with which generative AI products have been rolled out by Big Tech in the past six months. Barely a week goes by without a new launch. 

For regulators around the world, this has set alarm bells ringing. The nascent AI industry is largely self-regulated, but as it grows, the parallels are becoming clear with another technological phenomenon of recent decades: social media. Governments are desperate to learn the lessons from the light-touch approach to the regulation of social media in its early years, which led to uninhibited growth and ultimately exposed users to potentially harmful content. 

But AI regulation need not be an obstruction to innovation. In order to build and sustain public trust – which will ultimately benefit the industry as it seeks wider adoption of AI – comprehensive but flexible regulation is the only way to create a safe and transparent system that aligns with human values.

“Break it first, fix it later”

The competition between Big Tech companies is hotting up as powerful new tools are developed at rapid speed and hastily released to the market. Companies such as Microsoft, Google and Amazon are currently pushing development teams to their limits as they race to be first to market and gain traction in an industry that is predicted to contribute up to $15.7 trillion to the global economy by 2030.

What is driving this “break it first, fix it later” approach from Big Tech? For a clue, consider that Google recently labelled ChatGPT’s launch a “code red” threat to its search engine dominance. Regulation becomes all the more necessary when commercial interest potentially outweighs ethical and safety considerations.

Yet with consumer demand driving the AI explosion, there is no sign of the companies slowing down – or of their investors taking their foot off the gas either.

Self-regulation – not an option

When it comes to a digital industry regulating itself, the example of social media platforms serves as a cautionary tale. Over the past decade, as the use of social media has proliferated, so too has the societal impact of these platforms.

In recent years it has become apparent that social media platforms failed to detect and prevent harmful or misleading content from reaching users. One study of political bias on social media found that users are affected by the “echo-chamber characteristics” of these networks, which increase exposure to partisan and low-credibility news sources.

Self-regulation by social media companies competing with each other for users has not been sufficient to protect those users, and we’re still grappling with the consequences today as lawmakers try to gain some control over what these platforms are responsible for publishing. By the time regulators realised there was a problem, it was already too late.

How can we strengthen regulation?

Policymakers around the world are proposing regulations to ensure that AI adoption evolves in a safe and responsible way. The European Union (EU) has published a first draft of its AI Act, and the White House has released a blueprint for an AI Bill of Rights. The hope from both lawmakers and businesses is that regulation can be implemented without obstructing innovation.

For it to work, there must be greater education about what AI is and how companies are adopting the technology. If those with the power to create AI regulations don’t fully comprehend this fast-moving landscape, there is a risk to AI innovation. As politicians – most notably in the US – demonstrated when they attempted to hold social media giants such as Facebook to account, an underlying lack of knowledge of the technology can lead to damaging decisions.

A repeat of that pattern could prevent the true positive impact of AI on human populations from being realised. It is for this reason that policymakers must engage with technical experts to draft laws and regulations. Any new laws must strengthen trust in AI without unnecessarily hindering the development of small companies that risk being overburdened and overregulated. 

Organisations must be ready for any increase in regulation around AI. It’s vital that businesses protect their data and ensure that any data used to customise or fine-tune AI models is compliant. They also need safety measures that govern how their data is used for AI – so it should be a priority for businesses to keep control of their data and protect it behind their own firewalls. Federated learning is a technology that allows AI models to be trained on distributed datasets while the data stays where it resides. This enables organisations to access complementary datasets and collaboratively train AI models without surrendering their competitive edge, as the sketch below illustrates.
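To make the idea concrete, here is a minimal sketch of federated averaging (FedAvg) in Python. Everything in it is illustrative: the clients, datasets and function names are made up, the data is synthetic, and it shows the general technique rather than any particular vendor’s implementation. Three simulated clients each train a linear model on their own private data and share only model weights with a central aggregator.

    import numpy as np

    # Hypothetical sketch of federated averaging (FedAvg): three simulated
    # clients each hold a private dataset that never leaves their side;
    # only model weights travel to a central aggregator.

    rng = np.random.default_rng(0)
    TRUE_W = np.arange(1.0, 6.0)  # ground-truth weights for the synthetic data

    def make_client_data(n=100, d=5):
        # Synthetic private dataset held by one client.
        X = rng.normal(size=(n, d))
        y = X @ TRUE_W + rng.normal(scale=0.1, size=n)
        return X, y

    def local_update(w, X, y, lr=0.1, epochs=5):
        # A few steps of gradient descent on the client's local data only.
        for _ in range(epochs):
            grad = 2.0 * X.T @ (X @ w - y) / len(y)
            w = w - lr * grad
        return w

    clients = [make_client_data() for _ in range(3)]
    w_global = np.zeros(5)

    for _ in range(20):  # communication rounds
        # Each client refines the current global model on its own data...
        local_ws = [local_update(w_global.copy(), X, y) for X, y in clients]
        # ...and the server aggregates only the weights, never the raw data.
        w_global = np.mean(local_ws, axis=0)

    print("true weights:   ", TRUE_W)
    print("learned weights:", np.round(w_global, 2))

The privacy-relevant design choice sits in the aggregation step: the server sees parameter updates, never raw records, which is what allows collaborating organisations to train a shared model while their datasets stay behind their own firewalls.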

We are still in the very early stages of understanding the power of AI. As the industry continues to grow, governments must work hand in hand with companies and stakeholders across society to build a robust regulatory framework. As AI adoption gathers momentum, it is essential we get in front of this generational technology and regulate it in a way that works for everyone – before it is too late. 

Author

  • Robin Röhm

    Robin Röhm is co-founder and CEO of Apheris. The platform enables fast-growing businesses to build Artificial Intelligence and other data applications across organizational and geographical boundaries, without sacrificing data privacy or intellectual property. Robin has founded three start-ups, worked in financial services, and has degrees in medicine, philosophy, and mathematics.
