This month (August 2024) marks two major developments in the AI regulation space:
- The implementation of the EU AI Act from 1st August
- The UK government’s forecasted AI Bill
A common thread we are seeing in both of these regulatory frameworks is a targeted focus on big tech companies, such as Microsoft, Google, and Amazon, alongside the leading LLM developers including OpenAI, Anthropic, and Meta.
This focus isn’t especially surprising, and it is arguably well-justified: big tech companies are generally the ones pioneering the most groundbreaking and radical technical developments in AI, the kind most likely to pose a real (or at least perceived) threat to society.
Our research supports this position, indicating that the prevailing opinion in the tech community is that regulation should focus particularly on big tech companies.
Specifically, in response to the question ‘Do you think that AI regulation should focus especially on big tech companies?’, 57% of respondents voted ‘yes’, compared with 34% who voted ‘no’ and just 9% who remained unsure.
More generally, however, AI regulation remains a controversial issue, one that has raised understandable concerns in the industry over the potential stifling of technological innovation. This is a particular concern in countries where the fledgling tech startup ecosystem is only just starting to flourish.
In the UK, for example, the prospects of the tech startup industry are looking rocky, given the new Labour government’s current ambiguity over upcoming AI legislation and its scrapping of the £1.3 billion in funding promised by the Conservatives for UK tech and AI projects.
In light of this funding cut, which has come as a major disappointment for innovators and investors alike, there is perhaps more need than ever for clear direction and support from the government to prevent stagnation in the industry and loss of confidence from investors.
Dan Thomson, founder and CEO of AI replica builder Sensay, gives voice to these concerns in a recent Computing article, arguing that “it is exactly at this time that the Government must reassure consumers and investors that UK AI is a trustworthy space.”
But what exactly should this reassurance look like?
Below, we consider whether focusing legislation on the big tech companies and foundational model providers is the right approach to developing a flourishing and ethical AI industry, both in the UK, and more broadly within Europe.
What is the main focus of the UK government’s AI Bill?
In their pre-election manifesto, Labour proposed an AI Bill that would focus on ‘the handful of companies developing the most powerful AI models’. This indicates the newly elected government’s intent to target legislation at the big tech companies and major LLM developers, even though the majority of these are based in the US.
While the Bill is not expected to be officially passed until the end of this year, and could see further amendments during its roughly two-month consultation period to address broader concerns such as IP infringement, its primary focus remains the big tech companies.
This is clear from King Charles’s speech last month: although explicit mention of the AI Bill was notably absent, the speech stated that the Labour government would “seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”.
Furthermore, UK Tech Secretary, Peter Kyle, told representatives from Google, Apple, and Microsoft that the Bill would remain tightly focused on regulating foundational models, in an attempt to alleviate their concerns that the legislation would grow to impact additional areas of AI.
Could the AI Bill enforce sanctions on big tech companies?
Under the forthcoming Bill, AI model developers will be legally required to allow governments to test new foundational models for risks before they are released to the public, with the provision that companies would be prohibited from developing or deploying a model at all if severe risks could not be mitigated.
The enforcement of this legislation might not make any tangible difference to the policies of AI model developers, given that most of the major developers have already signed voluntary pacts agreeing to these terms under the previous Conservative government at the AI Safety Summit in Bletchley Park last year, and the more recent Seoul Summit in May this year.
Nevertheless, Tom Allen, reporting on the AI Bill in a Computing article, points out that the legal enforcement of these agreements is being endorsed by senior government officials as a preventative measure to ensure that companies cannot back out of the agreements should they become commercially undesirable or unviable.
In this sense, the Bill aims to counteract the notorious tendency of big tech companies to modify or disregard agreements they have entered into in order to further their ambitions. However, it remains uncertain how effective the AI Bill will prove at actually enforcing compliance, given big tech companies’ strong track record of surreptitiously evading legal sanctions and restrictions imposed on them for unethical practices or unconscionable conduct.
To illustrate this, just last week (August 5th), the US District Court ruled that Google had violated US antitrust law by intentionally monopolizing the internet search business, using its unmatched revenue to secure exclusive contracts and retain its dominant position as the default search provider. Penalties for the violation are still under review – but even once they are finalised, they are expected to take years to enforce. Google’s forthcoming appeal against the ruling could further delay enforcement and significantly affect the severity of any sanctions imposed on the company.
Additionally, as Adam Kovacevich, founder of the tech advocacy group Chamber of Progress and former Google policy director, pointed out in a comment to CNN, the consequences of Google’s sanctions are unlikely to address the systemic issue of monopolization within the tech industry. Instead, they are most likely to play into the hands of the other major competitors.
“The biggest winner from today’s ruling isn’t consumers or little tech, it’s Microsoft. Microsoft has underinvested in search for decades, but today’s ruling opens the door to a court mandate of default deals for Bing. That’s a slap in the face to consumers who chose Google because they think it’s the best.”
Adam Kovacevich, founder of the tech advocacy group Chamber of Progress and former Google policy director
The Google lawsuit has also been likened to the antitrust case brought against Microsoft at the turn of the century. Although considered a landmark in the history of US tech showdowns, that case had very little impact on Microsoft’s dominant position in the tech industry. Indeed, just over a year after Microsoft was ruled to have violated US antitrust law for much the same reasons as Google, the company managed to evade the proposed sanction of being broken up into two separate firms through appeals and prolonged negotiations with the government.
Such cases demonstrate the complexity of holding big tech companies to account, particularly given the extensive resources they can bring to negotiations. Added to this, big tech companies tend to operate across several major domains (e.g. search engines, cloud services, AI tools), providing crucial, high-quality services that the general public, businesses, and even governments have come to depend on.
In light of this, it’s hard to envisage exactly how much difference the AI Bill’s legal enforcement of the agreements already signed voluntarily by tech companies will make. This is especially true given that most big tech companies are not even fully under the UK government’s jurisdiction, with the majority operating primarily in the US.
Is the AI Bill too focused on big tech companies?
Even setting enforcement aside, the UK government’s intention to focus on a small number of the most powerful AI model developers has attracted criticism from innovators in the tech industry, Thomson among them, because of its limited scope of application.
“I and many others in the industry had hoped that the government’s proposed AI bill would clarify the UK’s position, laying out a clear plan for the implementation of a robust framework to safeguard user data and build trust in the industry. Instead, it looks as if the bill will be a small-scope piece of legislation, limited to preventing general-purpose foundation models from ‘causing harm’. From the little information we have on the bill so far, there’s been no mention of specific, narrow-purpose AI models, suggesting many use cases might continue to go unlegislated. And, crucially, there’s still been no acknowledgement of consumers’ and businesses’ very real data concerns.”
Dan Thomson, founder and CEO of AI replica builder Sensay
In defence of the UK government, further legislation relating to cybersecurity and data privacy does seem to be on the agenda, with the Digital Information and Smart Data Bill (DISD), and a new Cyber Security and Resilience Bill also mentioned in the King’s speech.
However, these additional Bills remain relatively limited in terms of regulating the real-life applications of AI. Instead, they are really more focused on fuelling collaborative innovation projects and improving cyber resilience. The DISD, for example, is expected to streamline the use and sharing of data, providing exemptions from data-sharing restrictions for scientific research, while the Cyber Security and Resilience Bill will mainly focus on gathering intelligence on ransomware attacks.
This means that when it comes to overseeing the actual development and deployment of AI, the UK’s forthcoming legislation is focused almost solely on big tech companies and foundational model developers. This approach risks spending time and energy creating restrictions for foundational model developers that may not even be enforceable, at the cost of leaving the many applications and sub-sectors of AI development insufficiently monitored and unprepared for the compliance requirements now impacting the global tech industry as a whole.
This could have unwanted consequences for the UK tech industry, driving investment into economies with greater computational resources, such as China and the US, or into Europe, where more holistic and balanced regulatory practices are already being implemented.
According to Andrew Pery, AI Ethics Evangelist at intelligent automation company ABBYY, the successful regulation of AI requires a holistic, balanced, and ‘whole-of-society’ approach, rather than occasional pieces of more stringent legislation with little relevance to the industry as a whole.
“Given the disruptive global impacts of AI, the need for legal certainty governing the use of AI systems is a desirable ambition. However, it’s important to get the balance right between ESG-focused regulation on one hand, and allowing businesses enough creative freedom to make advances in AI on the other. The bottom line is that operationalizing trustworthy AI demands a ‘whole-of-society’ effort that embraces a combination of approaches including voluntary codes of ethical AI best practices, AI standards and risk management frameworks, augmented by practical regulation that balances innovation while safeguarding against its adverse impacts.”
Andrew Pery, AI Ethics Evangelist at intelligent automation company ABBYY
What kind of example does the EU AI Act set for the UK?
According to Eduardo Crespo, EMEA Vice President at digital operations company, PagerDuty, the UK is on track to follow in the footsteps of the EU AI Act, implementing regulation that balances innovation with risk management.
“The UK is likely to follow a similar roadmap as Europe. Shifts in power at Westminster have pushed AI policy to the top of the agenda, as the new government seeks to provide clarity on managing innovation and unlocking value, while ensuring safe and ethical use. As rules start to apply with the EU AI Act, the clock is ticking to sort governance frameworks. While there is typically a transition period before enforcement begins, giving businesses time to comply, the EU AI Act is moving swiftly through stages of implementation, and organisations can’t afford to be caught out if they fail to comply.”
Eduardo Crespo, VP EMEA, PagerDuty
But aside from establishing a global standard and timeframe for AI regulation, what kind of example does the EU AI Act set for the UK in terms of focusing legislation on big tech companies?
While the EU AI Act is an example of holistic and standardized AI regulation that has implications for companies of various sizes and sectors, its risk-based framework means that it does include measures that specifically target the big tech companies and foundational model developers.
This is becoming evident in the Act’s implementation schedule, which started at the beginning of this month.
So far, the EU AI Office has launched the AI Pact, calling on AI system providers to voluntarily commit to the Act’s key provisions. However, in six months (2nd February 2025), the Act’s prohibitions on unacceptable-risk uses of the technology will take effect – and after 12 months (2nd August 2025), obligations will come into force for providers of general-purpose AI (GPAI) models.
Thus, we can see the legislation gradually zeroing in on the big tech companies in a way that is similar to the proposed focus of the UK’s AI Bill. However, the enforced obligations of the EU AI Act seem more focused on implementing transparency and explainability in model training, in contrast to the UK government’s active testing of models for public safety that is implied under the AI Bill.
Specifically, the obligations require all GPAI providers to:
- Provide technical documentation and instructions for use, as well as ensure compliance with the Copyright Directive and publish a summary of the content used for the training of the models.
- Conduct model evaluations and adversarial testing to ensure cyber resilience, and track and report any serious incidents, for any models that present systemic risk. What counts as a systemic risk? Any model with high-impact capabilities – in technical terms, any model whose training required more than 10^25 floating point operations (FLOPs). (A rough illustrative sketch of this threshold check follows below.)
Notably, these obligations are more lenient for free-to-use, open-source models, which are only required to comply with copyright laws by publishing the training data summary, unless they present systemic risk.
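To make the threshold concrete, here is a minimal sketch (in Python) of how a provider might self-classify a model against these tiers. The obligation lists are a condensed paraphrase for illustration only, and the model names and figures used are hypothetical, not taken from the Act or from any real provider.

```python
# Illustrative sketch only: a simplified paraphrase of the EU AI Act's GPAI tiers,
# not the Act's legal text. The 1e25 FLOP threshold for presumed "systemic risk"
# is from the Act; the obligation lists below are condensed for demonstration.
from dataclasses import dataclass

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # training compute above this is presumed systemic risk


@dataclass
class GPAIModel:
    name: str
    training_flops: float      # estimated total training compute, in floating point operations
    open_source: bool = False  # free-to-use model with openly available weights


def applicable_obligations(model: GPAIModel) -> list[str]:
    """Return a simplified list of obligations for the provider of a GPAI model."""
    systemic_risk = model.training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

    if systemic_risk:
        # Systemic-risk models carry the fullest set of obligations, regardless of licensing.
        return [
            "technical documentation and instructions for use",
            "copyright compliance and a training-data summary",
            "model evaluations and adversarial testing",
            "tracking and reporting of serious incidents",
            "cybersecurity protections",
        ]
    if model.open_source:
        # Free, open-source models below the threshold get the lighter regime.
        return ["copyright compliance and a training-data summary"]
    return [
        "technical documentation and instructions for use",
        "copyright compliance and a training-data summary",
    ]


if __name__ == "__main__":
    frontier = GPAIModel("hypothetical-frontier-model", training_flops=3e25)
    small_open = GPAIModel("hypothetical-open-model", training_flops=5e23, open_source=True)
    for m in (frontier, small_open):
        print(f"{m.name}: {applicable_obligations(m)}")
```

In practice, of course, classification depends on the Act’s full legal criteria and on Commission designations, not on a single compute figure, but the point stands: the threshold gives providers an objective starting point for working out which regime applies to them.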
In this way, the EU AI Act not only imposes particular requirements on GPAI developers but also provides objective standards to determine which obligations will apply to which developers. This is beneficial for two key reasons:
- It helps to prevent potential manipulation of the regulations by big tech companies. Without objective standards on which to base enforcement actions, it is easier for these companies to twist the regulations in their favour, especially given the influence they already have over governments.
- It helps companies prepare themselves for the implementation of the Act, enabling them to independently determine the level of risk that their development/deployment of AI involves, and ensure their compliance with the associated obligations in advance.
Crespo highlights the importance of transparency in AI legislation for businesses in general and argues that this will be equally important in the UK, where currently there is a decided lack of transparency.
“With the EU AI Act coming into force as of August, it is of paramount importance for business leaders to understand their strategy and compliance programme around AI. Although this can be viewed as an arduous and time-consuming undertaking, businesses in the European Union need to have internal procedures ironed out to take advantage of AI. This is also key for the UK, as we look to legislate around emerging technology.”
Eduardo Crespo, VP EMEA, PagerDuty
Overall, the risk-based legislative framework of the EU AI Act promises to be an effective way for governments to target legislation at big tech companies and foundational model developers fairly. It could therefore set a helpful precedent for the UK government, which is set to review and amend the AI Bill over the next couple of months, by demonstrating how targeted legislation can be enforced through a standardized and objective framework.
It also ensures the resilience of the Act in the face of unforeseen developments in the tech industry, which is especially important at a time when Generative AI is shifting power dynamics between the big players.
Do big tech companies have an unfair advantage with ESG policies?
The ultimate goal of all AI legislation is to ensure that every company developing or utilizing AI has sustainable and ethical ESG (environmental, social, and governance) policies in place. In this domain, however, big companies and established organisations have a major advantage over smaller players, typically having more experience and access to better resources.
As Pery points out, this then has a direct knock-on effect when it comes to AI legislation, because companies with established ESG policies are less likely to struggle to implement transparency and accountability in their workflows.
“Companies with existing strong ESG commitments are better positioned to comply with AI risk management and quality management frameworks and regulations such as the EU AI Act, as it gives them a head start in transparency and accountability.”
Andrew Pery, AI Ethics Evangelist at intelligent automation company ABBYY
Furthermore, Iddo Kadim, field CTO of AI-centric solutions provider, NeuReality, points out that the ESG policies required by AI legislation could create a disadvantage for tech startups, and not just because they lack established ESG policies as new companies.
Operating at the interface between technically focused foundational model developers and the fluctuating markets of different industries, tech startups are the ones most likely to bear the brunt of the ESG complications that arise when pure technological proficiency meets the chaos of the real world.
“Overall, the companies most affected by regulatory measures are the ones that build AI applications. The more risk associated with their application, there will be more eyes on their product to ensure it meets regulatory requirements or run the risk of devastating consequences, both financially and reputation.”
Iddo Kadim, field CTO at NeuReality
So how can this disadvantage be mitigated? According to Kadim, the responsibility for ensuring the compliance of ESG policies with AI legislation in actual use cases is a burden that should be shared, which would mean foundational model developers implementing practices to guarantee data privacy, robust cybersecurity, and environmental sustainability.
“Companies that build infrastructure for AI development and deployment can help companies that build AI applications by implementing and enforcing privacy, security controls and helping minimize energy consumption.”
Iddo Kadim, field CTO at NeuReality
This consideration is another major reason why AI legislation should target big tech companies and foundational model developers. But perhaps more importantly, it also highlights that legislation needs to target these companies comprehensively, addressing not just the risks that take up the spotlight, such as bias, data privacy, and security, but also risks such as high energy consumption and inequity.
The high energy consumption rates of foundational models, for example, are widely recognised as an ongoing issue and disadvantage of AI, but are not typically classed as a ‘risk’. More to the point, it is not considered a risk that foundational model developers are held to account for.
Instead, most big tech companies are focused on securing as much of the Earth’s energy supply as they can to fuel the growing computational demands of their technology, often with little regard for the impact of those demands on local environments.
Arguably, a focus on sustainability is an aspect of ESG policy that is largely lacking from AI legislation, particularly from the targeted legislation for big tech companies that are most responsible for the soaring energy demands resulting from AI.
Despite this, environmental sustainability is widely recognised as a cornerstone of responsible ESG practices, and companies will soon be legally required to track and report the environmental impact of their practices.
“Environmental sustainability is another important ESG factor. The storing of large volumes of data for Generative AI can be energy intensive, and to be compliant with ESG regulations companies will need to track and report their emissions from AI use. This could encourage them to consider more energy-efficient options such as purpose-built AI for specific tasks, which is based on more energy efficient Small Language Models.”
Andrew Pery, AI Ethics Evangelist at intelligent automation company ABBYY
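As a rough illustration of the kind of automated emissions estimate such reporting might draw on, here is a minimal back-of-envelope sketch in Python. Every figure in it (GPU power draw, data-centre overhead, grid carbon intensity, GPU-hours) is an assumed placeholder chosen for the example, not a measured value or one cited by ABBYY.

```python
# Back-of-envelope sketch of an automated emissions estimate for an AI workload.
# Every number below is an assumed placeholder for illustration, not a measured value.

def estimate_emissions_kg(
    gpu_hours: float,
    avg_gpu_power_kw: float = 0.4,     # assumed ~400 W draw per accelerator
    pue: float = 1.2,                  # assumed data-centre overhead (power usage effectiveness)
    grid_kgco2_per_kwh: float = 0.35,  # assumed grid carbon intensity
) -> float:
    """Estimate operational CO2 emissions (kg) from accelerator usage."""
    energy_kwh = gpu_hours * avg_gpu_power_kw * pue
    return energy_kwh * grid_kgco2_per_kwh


if __name__ == "__main__":
    # Example: a hypothetical fine-tuning run consuming 10,000 GPU-hours
    print(f"{estimate_emissions_kg(10_000):,.0f} kg CO2e")  # about 1,680 kg under these assumptions
```

Real ESG reporting would, of course, need measured energy data and location-specific grid intensities rather than fixed assumptions, which is precisely where AI-assisted monitoring tools are expected to help.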
Nevertheless, as Kadim points out, it is too soon to tell whether this will make any real impact, especially given that it is not identified as a priority for governments and policymakers in regulating AI. Nor is there yet any signal that the tech giants will be legally obliged to take particular responsibility for the impact of AI on the environment, despite their major role in exacerbating the climate crisis.
“Historically, there has been extensive debate between the goal of making money, with innovation as its proxy, and responsibility to society. The intent of the AI Act encourage and rewards responsible AI innovation. As with any regulation, the actual interpretation and implementation will determine how successful the regulation is in achieving their goal. As of now, it is yet to be seen. Ultimately, sustainable and cost efficient AI solutions should be the goal of organizations across the globe. For society to trust AI, it must be safe, secure and sustainable. How could societies really trust an AI that makes the planet less habitable and society more dangerous? We must create an environment in which the companies that prioritize protections, people, or the planet, will thrive more than the rest. I am happy to see a good effort on protecting people’s privacy when working with AI but more must be done for the planet in terms of AI development.”
Iddo Kadim, field CTO at NeuReality
In conclusion, big tech companies do seem to have a significant advantage in adapting to current and upcoming AI legislation – and this is primarily due to the ESG strategies they already have in place and the resources at their disposal to develop new strategies. For tech startups and users of AI technology, developing robust ESG policies could be significantly more challenging and require more legwork.
However, as Pery points out, AI itself might present a solution to this apparent inequity, providing automated monitoring and reporting on ESG policies that can help companies better ensure their compliance with emerging legislation.
“It’s a double-sided coin: The need to comply with ESG policies encourages more AI accountability among businesses, but AI will also increasingly have an impact on informing ESG policies. AI technologies can improve the accuracy of ESG monitoring and reporting and predict and mitigate risks to improve ESG credentials.”
Andrew Pery, AI Ethics Evangelist at intelligent automation company ABBYY
There is a certain irony here: it is thanks to the foundational models and big tech companies that the rest of the industry can mitigate some of the disadvantages it faces in competing with these giants. But this power dynamic is certainly not a new phenomenon, and it serves as a reminder of the key role that big tech companies and foundational model developers play in the industry’s ecosystem.