Interview

Shaping the Future of AI: Mahault Albarracin on Ethics, Governance, and Sustainable Innovation

Mahault Albarracin is a prominent figure among today’s artificial intelligence speakers, specialising in AI governance, ethics, sustainability, and Active Inference technologies. As Director of Research Strategy & Product Integration at VERSES AI, she leads cutting-edge research into cognitive computing, helping shape the development of ethical, autonomous, and transparent AI systems.

A respected voice in the global AI community, Mahault Albarracin regularly delivers keynotes at major international events, guiding organisations on how to navigate the complex, evolving landscape of responsible AI. In this exclusive interview, she shares her expert insights on the ethical challenges, business responsibilities, and sustainability opportunities surrounding artificial intelligence.

Q: When examining today’s rapidly evolving AI landscape, what do you consider the most pressing ethical challenges, particularly around governance and risk management?

Mahault Albarracin: “AI can have environmental impacts, it can have societal impacts, it can have economic impacts. So, I think the most pressing currently is governance, especially governance complexity, because the current governance frameworks for AI are really insufficient to manage the risks of increasingly autonomous AI systems, particularly those that we hope will be able to have full agency.

“For example, we have to factor in actor, agent, and network risks. Ethical challenges will stem from these different levels of governance, including actor governance, which is focused on regulating developers and providers, but also on agent governance, which is about regulating autonomous agents, and ultimately on network governance, where coordination among multiple agents is going to be one of the things we’ll have to tackle — a little bit like geopolitics.

“Then we also have to care for transparency and accountability. Right now, most sophisticated AI is either a black box or too complex for most people to explain its decision-making process, and therefore to understand and possibly justify it. This leads to problems of possible bias and inequality. The training data can be biased, which results in algorithmic bias; that in turn leads to inequalities in outcomes, which are already baked into our societal systems and so are simply amplified.
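
To make the bias point concrete, consider one of the simplest checks used in practice: comparing a model’s rate of favourable predictions across demographic groups, sometimes called demographic parity. The sketch below is illustrative only; the predictions, group labels, and what counts as a worrying gap are assumptions, not figures from the interview.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-prediction rate
    between any two groups (0.0 means perfectly equal rates)."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Illustrative data: 1 = favourable outcome (e.g. loan approved).
preds  = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

gap, rates = demographic_parity_gap(preds, groups)
print(rates)           # {'a': 0.8, 'b': 0.2}
print(f"gap = {gap}")  # gap = 0.6, a large disparity worth investigating
```

A check like this only surfaces one narrow kind of disparity, which is exactly her point: the data is one source of bias among several, and spotting the others takes trained people, not just metrics.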

“But the data in itself is not the only source of bias, and identifying all these sources requires intense training for many people in the field. We also want to make sure, again on this point of autonomy, that we can balance the autonomy of the AI, its ability to learn and make decisions, with the control that we want to retain over it, a little bit like a legal system for us.

“For example, autonomous vehicles need to strike this balance to make decisions that we are ultimately comfortable with them making. Then there’s this whole idea of privacy violations, because a lot of data-mining techniques are used for training, and sometimes that data can be retrieved directly. There are many possible attacks you can run on different models to extract genuinely sensitive data.
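
One family of attacks she alludes to is membership inference, where an adversary tests whether a specific record was in a model’s training set. Below is a minimal sketch of the classic loss-based variant (in the style of Yeom et al., 2018); it assumes access to the model’s confidence in the true label, and the 0.5 threshold is purely illustrative.

```python
import math

def example_loss(model_prob_of_true_label: float) -> float:
    """Cross-entropy loss for a single example."""
    return -math.log(max(model_prob_of_true_label, 1e-12))

def looks_like_training_member(loss: float, threshold: float = 0.5) -> bool:
    """Loss-based membership inference: models tend to assign
    lower loss to examples they were trained on, so an unusually
    low loss suggests the record was in the training set.
    The threshold here is an assumed, illustrative value."""
    return loss < threshold

# Illustrative probabilities the model assigns to the true label.
seen_during_training = 0.98  # memorised example -> very low loss
never_seen = 0.55            # typical unseen example -> higher loss

for p in (seen_during_training, never_seen):
    loss = example_loss(p)
    print(f"p={p:.2f} loss={loss:.3f} member? {looks_like_training_member(loss)}")
```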

“And then there’s the energy consumption aspect of it all: training and running these models requires a lot of energy, and therefore has a very high environmental impact. So, we have to be conscious of the carbon emissions and the environmental strain these models place on the world.
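
The scale of that energy use can be estimated with back-of-the-envelope arithmetic: the energy drawn by the hardware, adjusted for data-centre overhead, multiplied by the carbon intensity of the local grid. Every figure in the sketch below is an assumed, illustrative value, not a measurement from any real training run.

```python
def training_emissions_kg(gpu_count: int,
                          hours: float,
                          gpu_power_kw: float,
                          pue: float,
                          grid_kg_co2_per_kwh: float) -> float:
    """Rough CO2 estimate: energy drawn by the GPUs, scaled by the
    data centre's power usage effectiveness (PUE), times the carbon
    intensity of the local grid."""
    energy_kwh = gpu_count * hours * gpu_power_kw * pue
    return energy_kwh * grid_kg_co2_per_kwh

# Illustrative run: 512 GPUs at ~0.4 kW each for two weeks,
# PUE 1.2, grid intensity 0.4 kg CO2 per kWh (all assumed figures).
kg = training_emissions_kg(512, 24 * 14, 0.4, 1.2, 0.4)
print(f"~{kg / 1000:.0f} tonnes of CO2")  # ~33 tonnes
```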

“Finally, we also have to consider the human side of design and the potential for manipulative design, where AI systems can be used to manipulate behaviour, for example in advertising. We’ve seen it in political campaigns, and this ultimately leads to the possibility of coercive use and loss of informed consent.”

Q: From a business perspective, how can companies practically integrate ethical principles and responsible AI use into their operations, especially when balancing shareholder interests, public trust, and regulatory pressures?

Mahault Albarracin: “This is a really tricky question, because ethics is obviously a tense subject. Companies have shareholders to answer to, they have public perception to manage, and then ultimately they also have the perspective of the people working in the company itself.

“There are solutions. For example, we talked about governance: you can adopt multi-layered governance, implementing actor, agent, and network governance within your systems. You handle the autonomous decision-making of the AIs, manage the interactions between agents at the network level, and use this governance framework to make sure that whatever ethical framework you do adopt can actually be implemented within your system.

“You should prioritise transparency, obviously, and trust. Establish standards for transparency and ensure that all your processes are auditable; this builds trust with your consumers, your stakeholders, and ultimately the regulators. Make sure that processes are understandable and verifiable. This is work you can do up front, and it leads us to the idea of ethics by design.

“What you want to do is really incorporate the ethical considerations you want to uphold right from the start. This implies a core culture of ethics, which means an ethics department with real power and the ability to audit all your projects, involved from the start of each project to the end.

“Then obviously, there’s all the regulatory compliance, and there are more and more AI regulations on the global stage. For example, there’s the European AI Act, which focuses on risk categorisation and encourages adherence to technical standards to mitigate certain risks through certifications like the CE marking.

“You should track this regulatory landscape, have your legal department pay attention to the regulations that are coming, and establish in advance what you will have to do, so that you’re not caught off guard.

“Obviously, this requires training. You have to train your employees and stakeholders to understand issues of bias, data privacy, transparency, and ethical responsibility; this needs to happen throughout the company, at every layer.

“You can also implement impact assessments: essentially, you want a system to identify and mitigate ethical risks in AI projects and to ensure alignment with internal and external standards. This can be done either by your ethics team or automatically; there are tools for this.
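
In schematic form, an impact assessment of this kind might record a project’s risk tier, its identified risks, and their mitigations. The sketch below is a hypothetical structure, loosely echoing the EU AI Act’s risk categorisation mentioned above; the field names and the clearing rule are assumptions, not an established template.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    # Loosely modelled on the EU AI Act's risk categorisation.
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class ImpactAssessment:
    project: str
    risk_tier: RiskTier
    identified_risks: list = field(default_factory=list)
    mitigations: list = field(default_factory=list)

    def is_cleared(self) -> bool:
        """A project passes only if it is not in the unacceptable
        tier and every identified risk has a mitigation (an assumed
        rule for illustration)."""
        return (self.risk_tier is not RiskTier.UNACCEPTABLE
                and len(self.mitigations) >= len(self.identified_risks))

review = ImpactAssessment(
    project="loan-scoring-v2",
    risk_tier=RiskTier.HIGH,
    identified_risks=["biased training data", "opaque decisions"],
    mitigations=["fairness audit per release"],
)
print(review.is_cleared())  # False: one risk still unmitigated
```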

“You want to have diversity in your teams as much as possible, and not just because of DEI standards but also generally because they will have the ability to see what others may not see. They’ll be able to ensure more equitable outcomes because of the diversity in their perspectives. I have some friends who like to say it’s diversity of perspective and thought that matters most, but ultimately diversity of thought is fostered by having a diverse crew.

“You also want to do continuous monitoring, because the fact that you intended for something to be ethical at the outset, and that things seem to be going well at first, doesn’t mean it can’t get worse down the line or produce unexpected, unintended consequences.

“Finally, in your KPIs you can include ethical AI metrics, benchmarks for ethical performance, and autonomy benchmarks as well. If you put these into your KPIs, you will be able to hold yourselves accountable.”
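
What such KPIs might look like in a regular reporting pipeline is sketched below. The metric names and thresholds are illustrative assumptions rather than an established standard.

```python
# A minimal sketch of ethical-AI KPIs wired into regular reporting.
# Metric names and thresholds are illustrative, not a standard.
ethics_kpis = {
    "demographic_parity_gap": {"value": 0.04, "max_allowed": 0.05},
    "explained_decisions_pct": {"value": 92.0, "min_allowed": 95.0},
    "privacy_incidents": {"value": 0, "max_allowed": 0},
}

def kpi_report(kpis: dict) -> None:
    """Print PASS/FAIL for each metric against its threshold."""
    for name, m in kpis.items():
        if "max_allowed" in m:
            ok = m["value"] <= m["max_allowed"]
        else:
            ok = m["value"] >= m["min_allowed"]
        print(f"{name}: {m['value']} -> {'PASS' if ok else 'FAIL'}")

kpi_report(ethics_kpis)
# demographic_parity_gap: 0.04 -> PASS
# explained_decisions_pct: 92.0 -> FAIL (below the 95% target)
# privacy_incidents: 0 -> PASS
```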
