Ethics

How Should Businesses Respond To The EU’s First-Ever AI Law?


That’s the question The Law of Tech will tackle at the AI Summit in London, together with our panel: Coran Darling (Associate, Intellectual Property & Technology, DLA Piper); Kerry Sheehan (Head of Service Development and Innovation, Business, AI and Communication UK, Civil Service); Racheal Muldoon (Barrister at Maitland); and Var Shankar (Director of Policy, Responsible AI Institute).

As our great friend Uwais Iqbal (Founder of Simplexico) would say on these occasions, defining terms is usually a good place to start when working with something complex. Let’s break this down. There are at least three broad areas to be addressed within this question:

  • Define what AI is;
  • Understand the business needs in light of AI; 
  • Explore the implications of the new EU AI Act. 

Let’s start with no. 1: what is AI? In true lawyerly fashion, let’s get straight to the point with a starting definition. Our favourite is one we recently learned from the course ‘The Fundamentals of AI’ by Simplexico: ‘We can think of artificial intelligence (AI) as a technique that uses machines to replicate the problem-solving and decision-making capabilities of the human mind’.

Before moving forward, it’s important to understand that not everything automated counts as AI, and the same goes for automated legal workflows. The deeper you dig into the technical aspects, the more complicated this definition becomes. A further categorisation, however, helps us identify two major groups within this technology:

  • Strong AI, also known as General AI or Artificial General Intelligence (AGI); 
  • Weak AI, also known as Narrow AI (essentially most of the examples available on the market today). 

Here comes no. 2: understanding the business needs in light of AI. Within the industry context, it’s striking that almost 85% of AI and machine learning projects fail to deliver; not the most reassuring start, is it? This is certainly a point we will need to tackle together in our panel at the AI Summit, to properly understand the reasons behind these concerning statistics. Perhaps more time and resources should go into framing the problem, rather than jumping straight to the solution? Food for thought.

Finally, a more legally oriented question with no. 3: how to interpret the EU AI Act in light of these developments. That’s something we hope to tackle in more depth with our fellow panelists, at least scratching the surface of the question from a legal perspective. What we find particularly interesting, and concerning at the same time, is the legislator’s insistence on categorising technology into siloed boxes; something we consider rather utopian, given our understanding of technology as a constantly evolving flux of developments.

Particularly with AI, we have reason to believe that the state of the art in technology and the human perception of its changes are still travelling the same road, but at two completely different speeds.

According to the Legislative Train Schedule (04.2023), the Commission has proposed a set of rules following a risk-based approach. In this sense, four risk categories of AI can be distinguished:

  • Unacceptable risk AI. Harmful uses of AI that contravene EU values (such as social scoring by governments) will be banned because of the unacceptable risk they create; 
  • High-risk AI. A number of AI systems (listed in the Annex to the Act) that create adverse impacts on people’s safety or fundamental rights are considered high-risk. To ensure trust and a consistently high level of protection of safety and fundamental rights, a range of mandatory due diligence requirements (including a conformity assessment) will apply to all high-risk systems; 
  • Limited risk AI. Some AI systems will be subject to a limited set of obligations (e.g. transparency);
  • Minimal risk AI. All other AI systems can be developed and used in the EU without legal obligations beyond existing legislation.
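For readers who think in code, the tiered structure above can be sketched as a simple lookup. To be clear, this is a purely illustrative sketch: the four tiers and their headline obligations come from the proposed Act as summarised above, but the example use cases, their tier assignments, and the function names are our own assumptions, not the Act’s actual annexes and certainly not legal advice.

```python
# Illustrative sketch of the EU AI Act's risk-based approach.
# Tier names and headline obligations follow the summary above;
# the example use-case mapping is a hypothetical simplification.

RISK_TIERS = {
    "unacceptable": "Banned (e.g. social scoring by governments)",
    "high": "Mandatory due diligence, including a conformity assessment",
    "limited": "Limited obligations (e.g. transparency)",
    "minimal": "No obligations beyond existing legislation",
}

# Hypothetical examples only; real classification depends on the Act's
# annexes and the specific deployment context.
EXAMPLE_USE_CASES = {
    "government social scoring": "unacceptable",
    "cv screening for recruitment": "high",
    "customer-service chatbot": "limited",
    "spam filter": "minimal",
}

def obligations_for(use_case: str) -> str:
    """Return the simplified obligation attached to a use case's risk tier."""
    tier = EXAMPLE_USE_CASES.get(use_case)
    if tier is None:
        raise ValueError(f"No example mapping for: {use_case}")
    return f"{tier}: {RISK_TIERS[tier]}"

print(obligations_for("spam filter"))
```

The point of the sketch is the Act’s design choice it mirrors: obligations attach to the risk tier of the use, not to the underlying technology, which is exactly the siloed categorisation we question above.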

And now, a simple observation: how can we consider those categories from a down-to-earth, practical point of view? Is the Legislator really in touch with the reality that people and businesses are facing on a day-to-day basis? 

We cannot wait to discuss this together at the AI Summit in London on 14 June 2023 at our panel ‘How Should Businesses Respond to the EU’s First-ever AI Law?’.

Don’t miss this opportunity! If you’re interested in joining the panel, register here for the conference and feel free to send us a message to continue the conversation. 

And keep an eye on our website for more legal AI content, as we have some exciting announcements and updates to share soon.

Support. Empower. Connect. This is The Law of Tech. 
