This article is written by Rudraksh (Rudy) Bhawalkar and Ray Eitel-Porter, Accenture.
We live in a world in which AI is ever-present, passively influencing the simplest of decisions in our daily lives. Think about it. You wake up and unlock your phone with face ID analysed by AI, your route to work is likely influenced by traffic and weather information provided by AI, and you might wind down in the evening watching shows recommended by AI. It's easy to see how AI makes daily life easier, but as its influence grows, so too do the potential risks, and therefore the need to make sure AI is being developed and used responsibly.
In the absence of industry-wide, standardised rules and principles for the ethical and responsible use of AI, who should make decisions about what is and isn't acceptable? Is it consumers? The companies developing AI? Regulatory bodies? Governments?
In 2021, the European Commission published its proposal for a draft AI Act in an attempt to comprehensively regulate and set standards for the development of secure, trustworthy and ethical AI. Final ratification is likely to happen in late 2022 or early 2023, following several years of consultation. Although the proposed regulation will continue to evolve, and much of it codifies best practices already adopted by AI leaders, early adopters have the opportunity to differentiate themselves and accelerate their AI capability.
The proposed regulation, if approved, will apply to "AI systems", a broadly defined term covering software that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments it interacts with.
Software qualifies as an "AI system" if it is developed using one or more of the following approaches or techniques:
• Machine learning approaches, including supervised, unsupervised, and
reinforcement learning, using a wide variety of methods including deep learning;
• Logic- and knowledge-based approaches, including knowledge representation,
inductive (logic) programming, knowledge bases, inference/deductive engines,
(symbolic) reasoning and expert systems; and/or
• Statistical approaches, Bayesian estimation, and search and optimization methods.
The proposed regulation does not apply to “AI systems” developed or used exclusively for
military purposes, or to public authorities of third countries or international organisations
where “AI systems” are used as part of an international agreement for law enforcement with
the EU or with one or more Member States.
The proposed regulation applies to providers of AI systems (the developer that offers the AI system on the market) and to users of AI systems (any entity using an AI system under its authority) as follows (a simple, illustrative scope check is sketched after the list):
• EU and non-EU providers that place AI systems on the EU market;
• EU users of AI systems; and
• Providers and users of AI systems located outside the EU, if the output produced by the AI system is used in the EU.
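To make this scope concrete, here is a minimal sketch of how an organisation might triage whether a given AI system falls within the proposed regulation's territorial scope. The field names and the triage logic are our own illustrative assumptions, not the regulation's legal tests (for instance, the exemption for certain international law-enforcement agreements is omitted for brevity).

```python
from dataclasses import dataclass

@dataclass
class AISystemContext:
    """Hypothetical description of an AI system for a rough scope check."""
    provider_places_on_eu_market: bool  # provider (EU or non-EU) offers the system on the EU market
    user_located_in_eu: bool            # the user operating the system is established in the EU
    output_used_in_eu: bool             # the system's output is used within the EU
    exclusively_military: bool          # developed or used exclusively for military purposes

def in_scope(ctx: AISystemContext) -> bool:
    """Rough triage of territorial scope, mirroring the bullets above."""
    if ctx.exclusively_military:
        return False  # exclusively military systems are excluded
    return (ctx.provider_places_on_eu_market
            or ctx.user_located_in_eu
            or ctx.output_used_in_eu)

# Example: a non-EU provider whose system's output is consumed by an EU business
print(in_scope(AISystemContext(False, False, True, False)))  # True
```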
The proposed regulation, if approved, will differentiate between uses of AI according to four risk categories, determined by the level of risk a given use poses to European values and fundamental rights (a simplified, illustrative triage is sketched after the list):
• Unacceptable/Prohibited: Prohibiting certain AI systems as unacceptable uses of AI, including social scoring, unauthorised use of biometric identification systems, manipulative use cases, and the exploitation of vulnerabilities of individuals or groups.
• High: Imposing extensive requirements on high-risk AI systems that affect the health and safety or the fundamental rights of humans, for example by discriminating against them or putting them at physical risk.
• Limited: Imposing specific transparency obligations on certain AI systems to make users aware that they are interacting with an AI system, thereby reducing the risk of manipulation.
• Minimal: Imposing minimal requirements on low-risk AI systems. Providers may voluntarily choose to create and implement codes of conduct which apply the EU's proposed requirements for high-risk AI systems.
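As a purely illustrative sketch, an organisation's AI inventory tooling might record a tier for each use case along the following lines. The category names mirror the list above, but the yes/no flags and the ordering of the checks are our own simplification, not the regulation's legal criteria.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

def classify(use_case: dict) -> RiskTier:
    """Simplified triage of an AI use case into the four proposed tiers.

    The keys below (e.g. 'social_scoring') are hypothetical flags an
    organisation might capture during an AI inventory review.
    """
    if (use_case.get("social_scoring")
            or use_case.get("manipulative")
            or use_case.get("exploits_vulnerable_groups")
            or use_case.get("unauthorised_biometric_id")):
        return RiskTier.UNACCEPTABLE
    if use_case.get("affects_health_safety") or use_case.get("affects_fundamental_rights"):
        return RiskTier.HIGH
    if use_case.get("interacts_with_humans"):  # e.g. a chatbot -> transparency duty
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# Example: a customer-service chatbot with no safety or rights impact
print(classify({"interacts_with_humans": True}))  # RiskTier.LIMITED
```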
The proposed regulation echoes existing Responsible AI best practices such as robust risk management, mature data governance, record keeping, traceability, transparency and monitoring, to name a few. Rather than focusing narrowly on regulatory requirements, organisations should use these best practices as the basis for building broader Responsible AI capabilities. Organisations should see this proposed regulation as the catalyst to move from reactive compliance to the proactive development of mature Responsible AI capabilities that will generate wider value.
Through our work helping leading organisations develop robust Responsible AI (RAI) capabilities, Accenture has developed a comprehensive Responsible AI Methodology, which is used to help organisations work through the four pillars of Responsible AI (Principles + Governance, Risk Policy and Controls, Technical, Cultural). Each pillar has a series of interlocking steps that can be taken, and organisations work through these steps and pillars to strengthen their Responsible AI capabilities. When organisations have this framework in place, they will be well placed to address any new regulation and assess its impact on their business without starting from scratch each time. Stay tuned for our upcoming research, which will share more detail on this approach and offer practical guidance to help organisations prepare for the forthcoming regulations.
Disclaimer – Nothing in this article should be construed as legal advice, and this article is not intended to be a substitute for legal counsel on any subject matter. Accenture is not licensed to provide, nor should be regarded as providing, any legal, accounting or related services.