Business leaders across Europe are calling for the EU AI Act to be paused, and with good reason. Currently, the Act risks stifling innovation before it even begins. The uncertainty alone will scare off investment, just when the US and China are ploughing ahead with a “build first, regulate later” approach. If Europe isn’t careful, it will fall behind.
Of course, AI brings risks. Governments absolutely should step in when harm is clear. However, in its current form, the Act targets the wrong thing. Instead of constraining the technology itself, we should focus on the people who misuse it.
Unanswered Questions and Unnecessary Hurdles
The voluntary code published in July doesn’t help. It’s vague, confusing, and creates a regulatory mess that companies can’t easily navigate. A better approach would have been to publish clear guidelines, allowing businesses to understand and prepare for what was to come.
Instead, we’re left with basic questions unanswered:
- What exactly counts as a “general-purpose AI model”?
- When does a model pose “systemic risk”?
- And who’s actually considered the “provider”?
Right now, nobody knows what’s in or out of scope. And there’s even less clarity for the vast majority of businesses – the downstream users. Most companies don’t build AI models; they use them to build solutions. Why create barriers for them when they’re the ones actually driving efficiency and growth?
The Act also introduces copyright considerations, requiring AI providers to comply with EU copyright law and publish summaries of their training data. In theory, that sounds like transparency. In practice, it’s pointless. Courts have already dismissed most of these copyright claims, and existing copyright law already provides a mechanism for resolving them. The requirement merely adds red tape without any additional protection.
The Real Problem Is Misuse, Not the Technology
The real problem isn’t the technology; it’s what people do with it. AI is already being used maliciously, whether that’s generating phishing emails, powering cyberattacks, or being weaponised in the Russia-Ukraine war. IBM found that AI was involved in one in six data breaches last year.
But here’s the thing: even if I build the cleanest, most transparent, bias-free model in the world, it won’t stop someone from misusing it. That’s why the smarter approach is to regulate misuse, not the model itself.
Right now, Europe is effectively telling AI companies: build here, and you’ll pay more, do more, and face more lawsuits. It’s hardly an incentive. Meanwhile, the US and China are moving at full speed.
The UK and US Forge a Different Path
The UK, by comparison, has taken a distinctly different approach to regulating AI, opting for a more agile framework that balances innovation and safety. Back in June, plans to regulate AI were delayed after Peter Kyle, the technology secretary, decided to pursue a more comprehensive AI bill in the next parliamentary session.
We can expect discussions around what this bill will look like to ramp up now that parliament has returned from the summer recess, especially given the recent appointment of Jade Leung, Chief Technology Officer of the AI Security Institute, as the Prime Minister’s AI advisor. The signs are positive: the UK government aims to align the timing of its AI framework with Trump’s AI policy in the US.
America has already made it clear that its new policies aim to accelerate US AI innovation and reduce regulatory barriers. Time will tell, but the hope is that the UK follows suit and avoids discouraging AI companies from doing business there through overregulation.
The Opportunity Is Too Great to Fear
The truth is, AI has already done far more good than harm. Sam Altman even predicted it could create the first one-person billion-dollar company. That’s the scale of the opportunity we’re talking about.
If Europe really believes AI is the future, then we need to look forward with optimism, not backwards in fear. If we want proper protection, we should start with people, not the technology.