How can organisations ensure ethical AI deployment?

By Peter van der Putten, Lead Scientist and Head of the AI Lab at Pegasystems

As artificial intelligence (AI) continues to mature and more businesses leverage the technology in their operations, it is important that it spreads in a trustworthy and responsible way. Guided by regulations such as the European Union Artificial Intelligence Act (EU AI Act), as well as corporate strategies and ethical policies, organisations want to adopt AI safely – but what are the right steps to follow?

Do we even need AI ethics?

Why would we need AI ethics in the first place, and what is it? Some people equate ethics with soft policy rather than hard rules; others see it as mere compliance, whether with law or internal policies. But ethics in general is simply about how to lead a good life, which means it can be an input into regulation, but it also provides guidance on how best to use a technology like AI within the constraints of what is allowed.

The key word here is ‘best’ – and for whom. Organisations need to ensure that AI is aligned to deliver on corporate mission statements and organisational values, but also benefits external stakeholders such as customers, clients, and partners. If you do that well, your customers will demand that you use AI rather than be cagey about it. Simple rules like ‘who benefits?’ and ‘don’t do to others what you don’t want done to yourself’ are the cornerstone of an ethical approach to AI, and underpin trustworthy AI principles such as fairness, privacy, transparency, and accountability.

Putting this into practice

This may sound abstract and theoretical, or perhaps like overreach if it has to be applied at the same level of detail everywhere AI is used. But the EU AI Act, for instance, is quite pragmatic about this. Rather than aiming to regulate AI in general, it regulates AI at the level of a particular AI system, built with a specific purpose and set of beneficiaries in mind, and with its own level of risk of doing harm.

On that basis, uses are categorised as unacceptable risk (prohibited), high risk (subject to greater scrutiny and documentation), limited risk (requiring more transparency), and minimal risk. This sets a baseline of requirements, but many organisations take it further.
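To make the tiering concrete, here is a minimal sketch in Python that models the four tiers as a simple triage structure. The example systems and one-line obligations are simplified illustrations of the kind commonly cited for the Act, not legal guidance.

```python
# A minimal, illustrative sketch of the EU AI Act's four risk tiers as a
# triage structure. The example systems and obligations below are simplified
# assumptions for illustration, not legal guidance.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited use"
    HIGH = "greater scrutiny and documentation"
    LIMITED = "transparency obligations"
    MINIMAL = "baseline requirements only"

# Hypothetical triage of example AI systems by their specific purpose.
EXAMPLE_SYSTEMS = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "CV screening for hiring": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for purpose, tier in EXAMPLE_SYSTEMS.items():
    print(f"{purpose}: {tier.name} -> {tier.value}")
```

The point of structuring it this way is that the obligations follow from the system's purpose, not from the underlying technology – which is exactly how the Act keeps its scope pragmatic.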

Zooming in on fairness

As a customer or citizen, the above probably sounds like a sensible approach to building fair AI systems, in the sense of fair as ‘just’. But within AI ethics, fairness also refers to a much narrower principle: ensuring that AI decisions and outputs do not discriminate against protected groups.

There are many possible sources of this bias, such as the rules built into the system and the input data at runtime, but for machine learning models a major source is the experimental setup, i.e. what data was used to train the models. A real-world example was a model for preventative healthcare that was trained to predict future healthcare costs rather than the future occurrence of a disease, thus disadvantaging patients with historically less access to care. Or take generative AI, where models are trained on sources such as online forums, or very old books because these are free from copyright – hardly a representative sample of how people live and write today.

Given that bias is mostly introduced by design choices that people make, efforts can also be made to reduce it to within acceptable limits. This starts with measuring bias and then addressing it – also beyond the development phase – by fixing issues in logic, data, and models. Real-world testing, backed by ongoing monitoring and continuous feedback loops, is critical to ensure that AI systems stay aligned with ethical guidelines, and the same goes for other principles such as privacy, accountability, transparency, and sustainability.
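To illustrate what ‘measuring bias’ can look like in practice, the sketch below computes one narrow fairness metric – the disparate impact ratio between groups’ selection rates – on made-up decision data. The data, column names, and the 0.8 threshold (the common ‘four-fifths’ rule of thumb) are assumptions for illustration, not a complete fairness audit.

```python
# A minimal sketch of measuring one narrow notion of bias: comparing
# selection rates between groups. The data, column names, and the 0.8
# threshold (the 'four-fifths' rule of thumb) are illustrative assumptions.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of positive decisions per group."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Lowest group selection rate divided by the highest."""
    rates = selection_rates(df, group_col, outcome_col)
    return rates.min() / rates.max()

# Made-up decision data: 1 = approved, 0 = declined.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1, 1, 0, 1, 0, 0, 0, 1],
})

ratio = disparate_impact_ratio(decisions, "group", "approved")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.60 for this toy data
if ratio < 0.8:
    print("Potential bias: investigate logic, data, and model issues.")
```

Measuring is only the start: the same metric would then be tracked in production as part of the monitoring and feedback loops described above.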

The future of AI and ethics

Whilst some organisations may be concerned that increased regulation could stifle innovation, that is not the case. Highly regulated industries that have deployed responsible AI, such as finance, tend to attract greater investment and enjoy more stability. Just look back at other technologies: the aviation industry would never have flourished if it wasn’t clear whether it was safe to get on a plane.

As AI adoption continues to grow, businesses that focus on deploying the technology responsibly will be able to scale their AI-driven operations much more easily and drive end-user and customer acceptance. By ensuring that systems align with user needs, organisations can drive AI adoption without the risk of monetary and reputational damage, and use it to realise their corporate mission while also benefiting customers, clients, and citizens.

Preparing for more agentic and autonomous intelligence

The way enterprise AI is developing into ever more autonomous systems does present new challenges. We have already seen this with the emergence of autonomous agents. It may not mean more regulation, however, because existing law is flexible enough to accommodate the development. For example, even though the EU AI Act does not specifically name agentic AI, it provides a structured framework to regulate all AI applications based on their potential risk levels.

As such, it is important that agentic AI is deployed with a high degree of transparency, accountability, governance, and control. For instance: use agents to review and critique the execution plans these systems produce; tightly limit the tools available to specific agents; and make sure agents recognise when a request is out of their scope, or when it is important to pull the human back into the loop during execution, for example to seek approval before taking critical actions or using certain powerful tools. A view into all the automated steps these agents take, at different levels of abstraction, is also key for interpretability, transparency, and accountability.
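To make these controls concrete, here is a minimal sketch of such guardrails: an allow-listed tool registry, an approval gate that pulls a human back into the loop, and an audit log of every step. All names and the approval workflow are hypothetical, not any particular product’s API.

```python
# A minimal sketch of agent guardrails: allow-listed tools, human approval
# for critical actions, and an audit log. All names here (Tool, Agent,
# human_approves) are hypothetical, not a specific product's API.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Tool:
    name: str
    run: Callable[[str], str]
    requires_approval: bool = False  # critical actions pull the human back in

@dataclass
class Agent:
    name: str
    allowed_tools: set[str]              # tightly limit available tools
    audit_log: list[str] = field(default_factory=list)

    def use_tool(self, tool: Tool, arg: str) -> str:
        if tool.name not in self.allowed_tools:
            self.audit_log.append(f"DENIED {tool.name}: out of scope")
            return "Request is out of this agent's scope."
        if tool.requires_approval and not human_approves(tool.name, arg):
            self.audit_log.append(f"BLOCKED {tool.name}: approval withheld")
            return "Action blocked pending human approval."
        self.audit_log.append(f"RAN {tool.name}({arg})")
        return tool.run(arg)

def human_approves(tool_name: str, arg: str) -> bool:
    # Stand-in for a real approval workflow (ticket, UI prompt, etc.).
    return input(f"Allow {tool_name}({arg})? [y/N] ").strip().lower() == "y"

lookup = Tool("lookup_order", run=lambda order_id: f"Order {order_id}: shipped")
refund = Tool("issue_refund", run=lambda order_id: f"Refunded {order_id}",
              requires_approval=True)

agent = Agent("support_agent", allowed_tools={"lookup_order", "issue_refund"})
print(agent.use_tool(lookup, "123"))
print(agent.use_tool(refund, "123"))  # triggers the human approval gate
print(agent.audit_log)                # the view into all automated steps
```

The audit log is what provides the view into every automated step, while the allow-list and approval gate enforce scope and human oversight at the point of action rather than after the fact.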

As the use of AI grows, more organisations will undoubtedly adopt the technology. Businesses that proactively address ethical concerns and implement measures to enhance transparency will be the ones that not only meet regulatory requirements but also gain a solid competitive edge by building stronger trust with customers and stakeholders. The ultimate goal of AI should be enhancing human decision-making, driving positive societal outcomes, and ensuring that AI is a force for good.
