
Rethinking ESG credentials for the Age of AI

In recent years, the context in which businesses operate has been deeply transformed by the conjunction of several forces, including climate change, social justice movements, and shareholder activism. The Covid-19 pandemic has further aggravated longstanding failures in equal access to employment opportunities.

These forces have put pressure on companies to favour long-term, sustainable value creation for the benefit of all stakeholders – employees, consumers, and citizens at large – effectively moving away from a short-term, narrow focus on shareholders. This development has been accelerated by investors' rapid adoption of Environmental, Social, and Corporate Governance (ESG) performance indicators to measure the sustainability impact of their investments. In 2021 alone, ESG funds attracted nearly $120 billion in flows. As a result, it has become imperative for business executives to create strong ESG propositions.

In parallel, the adoption of artificial intelligence (AI) is gathering pace across a variety of business functions, from production through to service operations, marketing, sales, human resources, and risk management. While AI adoption is highest in the tech, telecom, financial services, and manufacturing sectors, the 2021 McKinsey Global Survey reports increases in nearly every industry, with significant uptake at companies headquartered in emerging markets (e.g. China, North Africa, and the Middle East).

Effective AI governance should be a key component of any strong ESG proposition

The combination of these two trends – heightened ESG pressure and the growing impact of AI on business and society – suggests that effective AI governance should be a key component of any strong ESG proposition. A concrete example is the CEO Action for Diversity & Inclusion (D&I) Pledge – the largest CEO-driven business commitment to advance diversity and inclusion within the workplace – which has been signed by 2,000 CEOs and includes most Fortune 500 companies. These same companies are also increasingly using AI to support their workforce decisions, from recruitment and talent management to high-performing employee retention. Therefore, to deliver on their commitments, these executives must ensure that deployed AI solutions are purposely designed and continuously monitored to effectively enhance workplace D&I.

The same reasoning applies to environmental concerns. For instance, utility companies are increasingly deploying AI solutions for demand forecasting and power grid optimisation – partly in the hope of reducing greenhouse gas emissions from electric power, which account for almost 25 percent of total greenhouse gas emissions worldwide. However, this ambition can only be realised if the AI solutions deployed actually perform as expected. That’s where things can become tricky.

AI creates unique governance challenges

For all the progress that AI has made in the last decade, scaling robust and trustworthy AI solutions remains challenging. For one thing, AI systems evolve with data and use, which makes their behaviours hard to anticipate; and when they underperform, they are harder to debug and maintain than classic software. This can be particularly problematic when they are deployed in a rapidly changing environment such as a global pandemic. Indeed, massive market volatility and abrupt changes in consumer behaviour have led to significant drops in AI model performance across the retail, manufacturing, and financial services industries. Considering how electricity demand also oscillated between sharp drops and fast rebounds in line with lockdown measures, it would not be surprising to observe a similar phenomenon in this sector.
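To make this concrete, here is a minimal sketch of one common safeguard against such silent degradation: comparing the distribution of live inputs against the data a model was trained on, using the population stability index (PSI). The feature, the synthetic data, and the 0.2 alert threshold are illustrative assumptions, not a standard.

```python
import numpy as np

def population_stability_index(train_col, live_col, bins=10):
    """Compare the live distribution of one feature against training data.

    PSI is a common drift heuristic; values above ~0.2 are often taken
    as a sign that the model should be reviewed or retrained.
    """
    # Bin edges come from the training data so both samples are comparable;
    # the outer edges are widened so no live value falls outside a bin.
    edges = np.histogram_bin_edges(train_col, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf
    train_pct = np.histogram(train_col, bins=edges)[0] / len(train_col)
    live_pct = np.histogram(live_col, bins=edges)[0] / len(live_col)
    # Avoid division by zero in sparsely populated bins.
    train_pct = np.clip(train_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - train_pct) * np.log(live_pct / train_pct)))

# Illustrative usage: a pandemic-style demand shock shifts the live data.
rng = np.random.default_rng(0)
train_demand = rng.normal(100, 10, 10_000)  # demand seen during training
live_demand = rng.normal(80, 18, 10_000)    # abrupt drop, higher volatility
psi = population_stability_index(train_demand, live_demand)
if psi > 0.2:  # rule-of-thumb threshold
    print(f"PSI = {psi:.2f}: input drift detected, review the model")
```

A check like this catches the shift before it shows up as weeks of poor forecasts; the hard part in practice is deciding which features to monitor and who acts on the alert.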

Second, without proper oversight, AI may replicate or even exacerbate human bias. This is particularly problematic in high-stakes domains like recruitment, where incidents have already been reported. In 2015, a study demonstrated that women were less likely than men to be shown ads for high-paying jobs on Google. Three years later, Amazon reportedly scrapped an internal AI recruiting tool that was biased against female candidates.

These controversies have fuelled concerns over AI bias in employment and led to intensified policy activity. The New York City Council has passed legislation requiring that AI-powered hiring tools undergo annual third-party “bias audits”, while the EU AI Act, a comprehensive regulatory proposal from the European Commission, has identified this area as high-risk and thus subject to quality management and conformity assessment procedures. This development demonstrates how AI risks have become material risks.
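What such a “bias audit” actually measures can be sketched in a few lines. One widely used check is the disparate impact ratio – the selection rate of each group relative to the most-favoured group – with the US “four-fifths rule” as a conventional alarm threshold. The data, group labels, and threshold below are illustrative; a real audit covers more metrics and considerable legal nuance.

```python
import numpy as np

def disparate_impact_ratio(selected, group):
    """Selection rate of each group relative to the most-favoured group.

    'selected' holds the hiring tool's yes/no outcomes; 'group' holds a
    protected attribute (here, self-reported gender) per candidate.
    """
    rates = {g: selected[group == g].mean() for g in np.unique(group)}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Illustrative data: a screening model that advances men more often.
selected = np.array([1, 1, 0, 1, 1, 1, 0, 0, 1, 0], dtype=bool)
group = np.array(["M", "M", "M", "M", "M", "F", "F", "F", "F", "F"])
for g, ratio in disparate_impact_ratio(selected, group).items():
    verdict = "below four-fifths" if ratio < 0.8 else "ok"
    print(f"group {g}: impact ratio {ratio:.2f} ({verdict})")
```

On this toy data the impact ratio for group F is 0.50 (2/5 selected versus 4/5 for group M), which would fail the four-fifths check and trigger a deeper review.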

The need for AI Quality frameworks

In this context, executives face a new challenge: how can they make their companies more sustainable while maximising the benefits of AI?

The short answer is by implementing sound AI Quality frameworks. Here the term “AI Quality” refers to the set of observable attributes of an AI system that allows one to assess, over time, the system’s real-world success – that is, the value and risk the system generates for both the organisation and broader society.

The dimensions of an AI Quality framework may vary at the margins, but at its core such a framework must include four key categories:

  • Model performance: Assessing AI systems solely on their accuracy on benchmark datasets is a narrow approach to performance. A sound assessment should also cover model stability, conceptual soundness, and robustness.
  • Data quality: It is well known that high-quality data is essential to building trustworthy AI models. What is often underestimated is the effort needed to collect, clean, and process such data. Checking for missing data and data representativeness also helps build better models (see the sketch after this list).
  • Operational compatibility: AI models are never used in isolation; they are part of larger business and organisational structures. Therefore, facilitating their integration through documentation, model function, and collaborative capabilities is key to driving adoption.
  • Societal impact: AI systems are value-laden and as such, executives and board members must ensure that their behaviours reflect what their company stands for, particularly its ESG commitments. Hence the need for observable attributes of societal value and risk, such as transparency, fairness, privacy, and security.
[Figure: AI Quality Framework]
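As an illustration of how these categories translate into day-to-day practice, here is a minimal sketch of the two data quality checks mentioned above: missing data and representativeness. The function name, the 5% missing-data threshold, and the idea of a “reference” population table are assumptions made for the example.

```python
import pandas as pd

def data_quality_report(train: pd.DataFrame, reference: pd.DataFrame,
                        group_col: str, max_missing: float = 0.05):
    """Flag columns with too many nulls and groups misrepresented in training.

    'reference' stands in for the population the model will serve
    (for example, a census extract or the full customer base).
    """
    report = {}
    # 1. Missing data: columns whose null rate exceeds the threshold.
    missing = train.isna().mean()
    report["high_missing"] = missing[missing > max_missing].to_dict()
    # 2. Representativeness: gap between each group's share of the
    #    training data and its share of the reference population.
    train_shares = train[group_col].value_counts(normalize=True)
    ref_shares = reference[group_col].value_counts(normalize=True)
    report["share_gap"] = train_shares.sub(ref_shares, fill_value=0).abs().to_dict()
    return report

# Illustrative usage: the north region is over-represented in training data.
train = pd.DataFrame({"income": [50, None, 70, 80],
                      "region": ["N", "N", "N", "S"]})
reference = pd.DataFrame({"region": ["N", "S", "N", "S"]})
print(data_quality_report(train, reference, group_col="region"))
```

Comparable spot checks exist for the other three categories – stability tests for model performance, documentation templates for operational compatibility, fairness metrics for societal impact – and a mature framework would run them on every retraining, not once at launch.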

Moving forward

Quantifiable measures of the real-world impact of deployed AI solutions, as provided by AI Quality frameworks, would go a long way towards mitigating ESG risk in AI systems. Yet implementing such frameworks can appear daunting at first glance, and executives may instead opt to limit their use of AI in order to avoid any potential adverse impact on their ESG credentials. Considering the immense potential of AI for business growth, in my view this would be a self-defeating strategy.

Fortunately, a variety of processes and tools are available to help executives ensure that their data science teams design and deploy high-quality AI solutions. They should use them now – especially since the pressure on companies to become more sustainable, while ensuring that their AI solutions are trustworthy, is only likely to increase.

Author

  • Lofred Madzou

    Lofred Madzou is an expert on AI governance. Currently, he is the Director of Strategy and Business Development at Truera – an AI Quality platform that explains, debugs, and monitors machine learning models, leading to higher quality and trustworthiness. He is also a researcher at the Oxford Internet Institute, where he focuses on the governance of AI systems through audit processes. Before that, he was an AI Lead at the World Economic Forum, where he supported leading companies across industries and jurisdictions in their implementation of responsible AI practices and advised various EU and Asia-Pacific governments on AI regulation. Previously, he was a policy officer at the French Digital Council, where he advised the French Government on AI policy. Most notably, he co-wrote chapter 5 of the French AI National Strategy, entitled "What Ethics for AI?". He holds an MSc in Data Science and Philosophy from the University of Oxford.
