From regulated innovation to privacy protection: the EU’s AI Act and perspectives on what it means for business

On 13th March 2024, the European Parliament passed the AI Act, the most extensive and detailed legislation to regulate AI that the world has yet seen. While the Act still awaits final ratification (expected at some point in April), it is clear that it is here to stay and will shape the future of AI over the coming years. So far, the Act has elicited a variety of opinions from across the business and tech community.

However, while the impact of the Act will remain speculative in corporate spheres until it is actually implemented, there is one area where it may already be having a tangible effect: public sentiment towards AI.

One of the most commendable aspects of the AI Act is the protection it provides for citizens. Under the Act, several applications of AI deemed a threat to citizenship rights and democratic society have been banned:

  • Biometric categorization systems based on sensitive characteristics such as race, political affiliation, religion, and sexual orientation
  • Predictive policing
  • The scraping of facial images from the internet or CCTV footage to create facial recognition databases (with the exception of emergency use-cases such as a reported kidnapping or terrorist threat)
  • The monitoring of emotions in workplaces and educational institutes
  • Social scoring based on past and present behaviour and personal characteristics
  • AI systems that manipulate human behaviour to circumvent free will
  • Uses of AI that exploit human vulnerabilities such as age, disability, or socioeconomic status

These prohibitions, which address both actual and merely hypothetical applications of AI, provide reassurance for the general public and help to diminish futuristic fears, such as the notion of an all-knowing, all-anticipating AI overlord that monitors our every move.

And while the Act has been criticized by some advocates for not providing enough protection for citizens, it is a major step towards winning public favour for AI by demonstrating the human-first priorities of AI governance in the EU.

How well does the Act address the concerns that businesses and employees have surrounding AI?

Although the AI Act does a sound job of protecting citizens’ rights, doubt remains within the corporate world as to whether the Act will address the real challenges that businesses face from the widespread adoption of AI.

Natália Fritzen, AI Policy and Compliance Specialist at Sumsub, expresses concern over the Act’s ability to tackle the abuse of AI by bad actors, a major threat that undermines businesses’ security and their confidence in adopting AI.

“Regulators are playing checkers while fraudsters play chess. The EU AI Act is promising – we commend its aim to regulate AI risks. However, doubts loom over its ability to tackle increasingly popular AI-powered deepfakes. Deepfakes grew 10x in 2023; as the threat continues, we are not convinced upcoming measures will sufficiently safeguard business and the public. Policymakers must supplement the Act with stringent proactive measures. The Act’s emphasis on transparency from providers and deployers of AI systems is a step in the right direction. However, as shown by Margot Robbie in Barbie, lawmakers must acknowledge real-world atrocities and step away from a seemingly utopian regulatory landscape – where an EU AI Act is simply ‘Kenough.’ Deepfake regulation is an evolving field, and policymakers and governments must collaborate closely with private businesses, acknowledging their frontline role in combating AI-related illicit activities, to establish a robust regulatory framework.”

Natália Fritzen, AI Policy and Compliance Specialist at Sumsub

The Act’s lack of provision for regulating risks isn’t limited to deepfakes; it may reflect a more general limitation of the legislation. According to Daniel Christman, Cranium Co-Founder and ex-cybersecurity strategist at KPMG, “something that should be in there but isn’t, is that there is no requirement for red teaming all foundational models, which can create many security and safety issues, as it has been proven repeatedly that essential safeguards can be circumvented”.

Another aspect of AI adoption that has caused significant concern within the business community is the rise of automation in the workplace and the resulting displacement of many lower-skilled jobs. The AI Act does little to explicitly address this concern, perhaps because it is considered an issue best handled by businesses internally.

In an interview with the AI Journal, Laura Baldwin, President of O’Reilly, pointed out that the Act requires companies to take more responsibility for their use of AI. A key part of this is giving employees more education about how AI is impacting them and how they should be using it in their jobs.

“A huge component of anything around AI right now is getting people educated. And that doesn’t mean just being educated in the technical details like the developers who are building the LLMs. It’s for the marketing assistant, for example, who’s sitting there writing social media copy, but now has a way to make that happen faster with AI as a productivity tool. There’s so much fear mongering out there because the bulk of the population still thinks about AI as this thing that is going to end the world, as opposed to thinking about it as a tool. The mindset we’re trying to create in the company and externally is: how do you use AI to make yourself better? And how do you educate your teams to use it to make them better? Companies have an obligation to have tools at the ready to help train their people on this very complex, but also very empowering technology… I think you’ve also got to make sure that inside companies there is somebody responsible for thinking about where their AI is going, who can be the police officer inside the company.”

The dawn of regulated AI: how well does the Act enable AI innovation and development?

A prevalent concern within the tech community is that the increased regulation enforced by the Act will stifle innovation and slow AI’s current rate of development. To promote responsible AI innovation, the Act contains an innovation package that will provide testbeds and sandboxes to encourage experimentation with AI in a safe and monitored environment. The Act also allows high-risk AI systems to be tested in real-world contexts, provided safeguards are in place. However, it is questionable whether these provisions will measure up to the free and unlimited opportunities that have been available to AI developers until now.

Fawaz Naser, CEO of Softlist.io, applauds the concept of sandbox environments but argues that they do not constitute an overall benefit to businesses in their development and implementation of AI.

“[With sandboxes] businesses can test AI with real users under the supervision of authorities, yet they’re not permitted to commercialize the AI. The fundamental concept of the sandbox is commendable, but its effectiveness will hinge on how it’s implemented… I personally believe that the problem with this Act, as it currently stands, is that it doesn’t really offer benefits to AI developers and users. There’s hardly any part of the Act that makes it easier for companies to develop, test, use, and roll out AI compared to how things were before the Act.”

Fawaz Naser, CEO of Softlist.io

However, according to Laura Baldwin, the innovation package provided under the AI Act could be a game-changer for AI innovation by opening up resources and compute power to smaller businesses and startups.

“The piece of the AI Act that really takes care of smaller companies is the AI innovation package. The reality is, smaller companies need access to supercomputers, they’re not going to be able to spend the money to do that stuff themselves or to buy off pieces of that in the cloud. That’s where the cost of AI implementation is hard. Building your own RAG model, for example, is very difficult, very expensive. But the innovation package gives access to those supercomputers to smaller companies and startups. That in itself is a phenomenal component of the Act and one that small companies, midsize companies, and startups need to pay attention to.”

The provisions of the innovation package are also likely to have a knock-on effect on investment trends, which she suggests will encourage investment and, in turn, innovation.

“It’s easier to invest in a biotech startup, for example, where the cost of the supercomputer usage is going to be borne by the government and not by the investor themselves. You’re also not going to have to rebuild that infrastructure for every single company. I think that should spur innovation and a lot of investment because the investment costs won’t be as great.”

Sylvester Kaczmarek, CTO at OrbiSky Systems, also points out that the Act could alter investment priorities, shifting market trends and giving companies extra incentive to promote their ethical standards and legal compliance. This is likely to foster innovation on a more distributed and sustainable scale, opening up market opportunities to startups and companies that have already adopted systems for integrating AI in a more targeted and accountable way.

“The Act is likely to influence investment trends, with a possible shift towards startups and projects that prioritize ethical AI development from the outset. This could alter power dynamics within the AI sector, favouring entities that align with these new legal and ethical standards. In addition, the Act reflects a growing public demand for greater transparency and accountability in AI, signalling a shift in how AI is perceived and integrated into society.”

Sylvester Kaczmarek, CTO at OrbiSky Systems

While it remains difficult to predict the net impact of the AI Act on technological development, it is important to remember that the Act will undergo regular revision. As Daniel Christman points out in a statement for NTD news, this enables it to keep pace with innovation and to mitigate unintended consequences as and when they arise.

From ‘best practice’ to legal requirement: new priorities for businesses as they navigate the legal AI landscape

The phrase ‘best practice’ has perhaps been overused when it comes to AI in business. But it may be missed now that many best practices are becoming legal requirements, with non-compliance fines of up to €35 million or 7% of a company’s annual global turnover. For most SMEs and startups, however, fines would be lower, in recognition of the fact that these companies have fewer resources and less personnel to dedicate to regulatory compliance than larger, well-established companies, which typically already have a legal team or an established framework for ensuring compliance with industry standards.
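To put the turnover-based cap in perspective with a hypothetical example (and assuming, as under comparable EU regimes such as the GDPR, that the higher of the two amounts applies): a company with an annual global turnover of €1 billion could face a maximum fine of 7% × €1,000,000,000 = €70 million, double the €35 million figure.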

Nevertheless, non-compliance fines provide a big incentive for businesses to ensure that they stay up to speed with emerging legal requirements, utilize the free resources and guidance provided by governments, and implement internal strategies to monitor their use of AI.

“The transition demands a shift in organizational mindset, prioritizing not only the ethical but also the legal dimensions of AI development. Businesses must now integrate compliance into their operational DNA, ensuring AI solutions are designed with transparency, accountability, and security at their core. SMEs, in particular, face challenges in navigating this new landscape. The key to survival lies in adopting a proactive approach to compliance, investing in scalable AI solutions that can adapt to evolving legal frameworks, and seeking strategic partnerships for compliance support.”

Sylvester Kaczmarek, CTO at OrbiSky Systems

Furthermore, with its self-classification risk system, the Act will require businesses to become more proactive in evaluating the role of AI in their business strategy and to consider their longer-term plans for AI implementation, so that they can prepare to meet the different legal demands of each risk category.

Given the majority of current use-cases for AI in business, most companies will fall into the low/minimal-risk category. Nevertheless, adopting a forward-looking approach that takes into account not just the company’s independent goals but also any potential collaboration with partners in the higher-risk categories is key to being prepared for the implementation of the Act. Regular review of a business’s AI systems is also crucial to maintaining regulatory compliance while allowing for growth and development.

Vendors and AI providers face further responsibility, given their role in developing the AI technology that is then deployed at scale across many different business applications with differing levels of risk:

  • They must ensure that their models are trained on high-quality datasets from a wide and diverse range of sources to mitigate the problem of bias.
  • They will also need to provide transparency about how their models are developed. Providers of large AI models will be required to share this information with AI providers further down the chain to prevent misuse.

Clear and accurate communication between AI vendors and clients will become a must under the Act. And this goes both ways, not just from vendors to clients, as Fawaz Naser points out.

“Users of high-risk systems have their own set of responsibilities too. These include monitoring for potential risks and notifying providers about any serious incidents. Collectively, these regulations are designed to foster an environment where AI development and usage are conducted responsibly.”

Fawaz Naser, CEO of Softlist.io

What does the Act mean for global collaboration and intercontinental competition?

Many believe that the Act will give rise to a more ethical, accessible, and fair approach to the use of AI within not just Europe, but societies across the globe.

Daniel Christman argues that the Act will set a meaningful precedent in the regulation of AI, given the leading status of the European Union as an instigator of global standards for human rights and multinational trade negotiations.

“It’s impossible to overstate this leading regulation’s impact on the global AI development and deployment environment – likely even more so than GDPR. Given that all models impacting European citizens must comply with these requirements, any international organization developing or deploying models must abide. Many international regulatory bodies look to the European Union to provide a baseline, and given the complexity of the challenge to define and implement regulatory requirements on AI, the approval will be a significant catalyst for the global regulatory sphere for AI. I expect other countries to leverage the AI Act as a template to modify to support their particular requirements.”

Daniel Christman, Director of AI Programs and Co-Founder at Cranium

On the other hand, there is concern that the AI Act might slow the progress of AI and hinder business opportunities within the EU compared to other parts of the world.

“In my opinion, despite the Act’s detailed requisites, it might still place European enterprises at a disadvantage compared to their American and Chinese peers, who face fewer regulations. However, supporters of the Act argue that the ethical development of AI is not just a moral obligation but also a strategic benefit. Europe aims to become a world leader in trusted AI, drawing in top talent and investments by positioning itself this way. As other countries observe how the European Union’s ambitious AI venture progresses, one thing is clear: the era of unregulated, ‘Wild West’ AI is drawing to a close. The evolution of artificial intelligence will be influenced not only by technological advancements but also by our policy decisions. With the AI Act, Europe is opting for a future where innovation and ethical considerations are aligned — a future in which AI’s potential is fully realized for the greater good, while the rights and safety of citizens are vigorously safeguarded.”

Fawaz Naser, CEO of Softlist.io

Sylvester Kaczmarek also raises the point that the Act could disrupt some existing collaboration in AI research and development.

“The EU’s regulatory framework sets a precedent that could lead to fragmentation in global AI development efforts. However, it also presents an opportunity for setting global standards in ethical AI development, encouraging collaboration over competition. Fostering an environment of shared ethical values could mitigate the risks of regulatory divergence.”

Sylvester Kaczmarek, CTO at OrbiSky Systems

Nevertheless, whether or not other countries actively adopt the standards put forward under the Act, its effects will be felt on a global scale simply for reasons of convenience. Multinational corporations operating globally, alongside domestic companies that import their AI systems from the EU, will be forced to comply with the EU AI Act in their international dealings. In these cases, if we look to the GDPR as a precedent, such companies are likely to adopt the Act’s standards across the board, as Jonas Jacobi explains.

“Large American corporations that operate globally are already navigating complex regulatory environments like the GDPR, often choosing to apply these standards universally across their operations because it’s easier than having one set of rules for doing business domestically and another set of rules internationally.”

Jonas Jacobi, CEO and Co-Founder at ValidMind

Looking ahead

Overall, the Act does risk slowing the progress of AI development by restricting its use-cases and creating more legwork for companies adopting AI technologies. However, any global disadvantage this creates for Europe is likely to be temporary. The AI Act can be seen as a long-term investment by the EU in AI: taking precautionary measures now so as to build an ethical and sustainable framework that will enable more valuable growth in the future.

In time, and as the real impact of the Act emerges, other countries are likely to follow suit and adopt similar standards for AI governance. It is clear that concerns over AI are being felt on a global scale, though to different extents. In America, for example, Biden’s Executive Order on AI is forecast to come into force before the AI Act, suggesting that regulatory compliance standards for AI are already seen as a priority for countries aiming to become world leaders in the field.

It will be interesting, though, to see whether similar standards are adopted in non-democratic parts of the world such as China and Russia. While these countries have long been leaders in technological development, they have also tended to implement more restrictive and centralized policies for businesses. The actions they take in governing AI after the precedent set by the EU AI Act will reveal how the Act is viewed internationally, particularly in places with different values. Is it seen more as a measure to protect citizenship rights in democratic countries at the cost of technological innovation, or as a longer-term investment that will enable the EU to become a world leader in AI development?

Author

  • Hannah Algar

    I write about developments in technology and AI, with a focus on its impact on society, and our perception of ourselves and the world around us. I am particularly interested in how AI is transforming the healthcare, environmental, and education sectors. My background is in Linguistics and Classical literature, which has equipped me with skills in critical analysis, research and writing, and in-depth knowledge of language development and linguistic structures. Alongside writing about AI, my passions include history, philosophy, modern art, music, and creative writing.
