To what extent should PR practitioners declare using AI technology in any aspect of their work? Will all emails, press releases, or other related content need a declaration of what AI technology has been used in their creation, and how?
In 2020, the Chartered Institute of Public Relations’ (CIPR) AI in PR Panel published the Ethics Guide to Artificial Intelligence in Public Relations to support the industry with these future-facing dilemmas and considerations. The key areas and issues highlighted in that report remain entirely valid – namely, the use and application of AI; social change; the impact on the nature of work; and privacy controls and transparency issues. The arrival of a new generation of generative AI and machine learning technologies – available at scale and inexpensively – only brings these issues into sharper focus for the PR practitioner of today.
The ‘explainability’ of AI
The algorithms used in AI can be divided into white-box and black-box approaches. White-box models produce results that domain experts can interpret and explain. Black-box models, on the other hand, are extremely hard to explain and can often scarcely be understood even by experts.
Should all AI technology be explainable? How much understanding does a PR professional require to provide credible advice on the use or otherwise of AI in any relevant situation?
Even here, there are no simple black-and-white answers. According to The Economist magazine: “Whether people really need to understand what is going on inside an AI is less clear. Intuitively, being able to follow an algorithm’s reasoning should trump being unable to. But research by academics at Harvard University, the Massachusetts Institute of Technology, and the Polytechnic University of Milan suggests that too much explanation can be a problem.”
“Employees at Tapestry, a portfolio of luxury brands, were given access to a forecasting model that told them how to allocate stock to stores. Some used a model whose logic could be interpreted; others used a model that was more of a black box. Workers turned out to be likelier to overrule models they could understand because they were, mistakenly, sure of their own intuitions. Workers were willing to accept the decisions of a model they couldn’t fathom, however, because of their confidence in the expertise of the people who had built it. The credentials of those behind an AI matter.”
Trust and reputation are paramount – concepts at the heart of good public relations. The profession is already confronting these questions: the Institute’s latest research – ‘Artificial Intelligence (AI) tools and the impact on public relations practice’ – highlights that 5,800 technology tools with potential applications already exist. These cover a wide range of areas including research, planning, measurement, content, data and insights, management, reporting, and workflow.
Is AI in PR the future?
One of the most important roles that PR professionals can (and will) play in the future will be not only to understand how to use AI in their own work, but to provide organisations everywhere with sage counsel on interpreting the reputational implications of AI usage for any or all relevant stakeholder groups. Navigating this new terrain will require an expansion of the PR professional’s current skill and knowledge set. In some ways, there has never been a greater need for PR practitioners who can provide the insight and guidance necessary to make this a reality.
Of course, all of these AI technologies (sadly) have the potential to be weaponised for morally dubious purposes to create dis- and misinformation at unimaginable speed and scale. The ethical, legal, and reputational issues around this alone have the potential to keep businesses – and the PR professionals they employ – very busy.
The AI industry has its own ethical problems to deal with
Some firms in the AI sector stand accused of exploiting cheap labour for data labelling, which is essential to machine learning. Is this the equivalent of, say, using child labour in clothing manufacturing?
There are also potential legal issues around the use of generative AI technologies. At the beginning of 2023, a number of copyright cases were already under way: Getty Images sued Stability AI, creator of the popular AI art tool Stable Diffusion, over alleged copyright violation, and a trio of artists launched a lawsuit against Stability AI, Midjourney, and the artist portfolio platform DeviantArt, which recently created its own AI art generator, DreamUp.
There’ll always be a place for good governance, leadership, and management of AI resources in public relations, irrespective of the sophistication of the tools – and that requires informed human intervention. There’ll always be a wider role, too, for the questioning voices at the heart of organisational governance who ask guardianship questions about the implications of technology use. Just as an ethical medical or legal practice is never left solely to the medics and lawyers, technology isn’t something to be left solely to the technologists.