Future of AI

AI rewriting the rules of reputation risk management

By Tal Donahue, a director at communications firm Infinite

The misuse of AI is now the number one reputational risk facing brands.

That’s according to the latest Reputation Risk Index from the Global Situation Room, which ranks harmful or deceptive use of AI as a more severe threat to brand perception than issues such as whistleblowing, price fixing and breach of contract.

PR is the critical function for building and preserving reputation – an asset worth billions and estimated to represent almost a third of FTSE 100 and S&P 500 company market caps. In fact, for some of the biggest players in the AI market – such as Nvidia, Microsoft and Apple – reputation contributes more than half of overall company value (Alphabet is not far off that figure).

This huge growth of reputation as a source of economic value is a critical driver as companies race to lead, or keep up with, the AI revolution.  

But, conversely, as the pace of AI adoption accelerates, bumps in the road have the potential to cause reputational ruin.

PR’s AI adaptation 

In this context, the role of PR and the nature of PR campaigns is evolving. 

New AI laws, such as the EU AI Act, are being rolled out, and forward-thinking PR advisers will need a close handle on AI governance: from the explainability of AI-assisted decision making, to visible compliance with provenance standards and disclosures, to ensuring that the pursuit of competitive advantage doesn’t lead to ‘AI washing’ in communications (an issue which is already leading to legal action).

At the same time, PR teams and workflows are adapting. AI is transforming the PR value proposition, putting a premium on human intervention and, critically, on that very non-machine faculty of imagination. Imagination in terms both of creative problem solving and of problem identification – including, not least, identifying the various reputation risks that AI itself might pose.

Rising reputation threats 

Leadership teams are under pressure to innovate quickly while managing the necessary capital expenditures and the broader organisational and cultural impacts of business transformation. 

This tension potentially leads to significant branding and communication challenges, especially in markets where differentiation is difficult.  

Companies face reputational risks – not just from falling behind or falling victim to AI wielding bad actors, but also from making missteps with AI themselves which could undermine hard-earned stakeholder trust. 

From stakeholder fatigue to escalating cyber threat 

Firstly, as AI adoption becomes more ubiquitous there will be fewer opportunities to leverage its use as a differentiator. Scepticism and fatigue are running high.  

Secondly, as awareness and understanding of AI – and its risks and limitations – grows, missteps and misapplications will not be tolerated. This is particularly true of AI misuse which contravenes an organisation’s existing ethical standards and values statements, and/or falls foul of emerging regulation.

Legal action, regulatory scrutiny, talent attrition, ‘bad press’, and even the deterioration of financial performance may result. All of these, and more, can threaten reputation and erode brand value. Indeed, a recent Deloitte survey of global businesses found that reputation damage was seen as the most severe potential outcome of a failure to follow ethical tech standards – including in relation to the deployment of AI.   

Thirdly, bad actors will deploy AI-powered techniques to attack brands and undermine trust. We have seen this numerous times in the media in recent years, including political figures being the target of AI-generated smear campaigns and brands subjected to cyber attacks made more sophisticated by the application of AI models.

Mapping reputation risk 

There is a broad range of issues which leadership and risk teams will need to consider in order to ensure that reputation is effectively protected as AI tools proliferate – both within and without organisations.

These can be grouped into three risk areas: input risks, operational risks and output risks.

  • Input risks  

Input risks relate to how AI systems are built, programmed and powered.   

They include the data sets that large language models are trained on and the extent to which they can be trusted to be free of issues such as ingrained bias and/or intellectual property infringement; the energy used by AI data centres and the associated carbon cost, which may stymie progress against stated sustainability goals; and the evolving landscape of regulation and compliance, which will enforce standards across AI technologies.

  • Operational risks 

Operational risks refer to issues resulting from the management and organisational adoption of AI tools, as well as the readiness to respond to external AI vectors. 

This includes the reliability of, or indeed over-reliance on, tools and the disruption that may be caused in the event of system downtime; cybersecurity infrastructure and the preservation of confidential and sensitive data, which may be put at risk through unchecked AI deployment; and organisational commitment to change management, not least in relation to upskilling workforces to use AI safely and effectively while also managing broader stakeholder expectations transparently.

  • Output risks 

Output risks relate particularly to the creations of generative AI tools, but can also refer to broader outcomes achieved or informed by any AI system.

These risks are broadly understood and include issues such as AI hallucination; black-box decision making and the loss of trust that can result from opaque use of AI in business processes – particularly as they relate to human capital, such as in recruitment; and the challenge of retaining organisational culture and value-adding human-to-human relationships internally and externally.

Communications and marketing teams will also be increasingly mindful of how the content they produce and the brand footprint they curate – media coverage, white papers, web pages etc. – aid discoverability by the AI tools which their stakeholders increasingly rely on as sources of information.

Understanding what an AI tool is likely to say about your brand, and deploying tactical communications that help your brand show up when a target question or prompt is plugged into, say, ChatGPT, Google AI Mode or Perplexity, is now core to the reputation building mission.  

Safeguarding reputation value 

Reputation management forms an essential part of any corporate brand-building programme as well as any crisis, resilience and continuity plan. The new risks posed by AI – as well as the many communications opportunities it offers – will need to be carefully assessed by leadership, PR and risk teams to ensure that AI is effectively used to create, and not threaten, reputation value.
