The COVID-19 pandemic has further underlined the importance of trust.
In this atmosphere of uncertainty, where it can be difficult to distinguish disinformation from accurate information, people are instinctively gravitating toward the individuals and organizations they trust the most.
So, how can organizations build and maintain trust in these times?
One strategy is to harness new technology, including artificial intelligence (AI) systems.
AI can be very effective at enabling organizations to earn and sustain the trust of their stakeholders.
But this effectiveness doesn’t happen by accident; it happens by design.
An AI system can only help to build trust if trust itself is designed into the AI system – right from the outset.
Designing trust into AI
To develop trusted AI – and, frankly, what’s the point in developing any other kind? – an organization needs to consider the potential ethical, social and technical risks of the system and what other impacts it is likely to have.
It is also essential to understand the relationship between AI and data.
An AI system will only ever be as good as the data from which it learns.
Since accountability is the foundation on which trust is based, an individual should ultimately be responsible for the decision framework used in an AI system and its outcomes.
In particular, governance and control structures are key to developing AI systems that businesses and consumers can trust – and therefore want to use.
In fact, a recent EY study – Tech Horizon: Leadership perspectives on technology and transformation – addresses this very question of trust.
The study highlighted the six habits of digital transformation leaders (companies that are harnessing technology to generate better financial performance than their peers).
One of these habits is “activating governance plans for emerging tech” – in other words, establishing standards and policies around governance, privacy, and the ethical use of technology.
Quantifying the risks of AI
Building trusted AI means confronting the risks that could undermine that trust.
These are design risks (for example, misalignment with business strategy or the subversion of human agency); data risks (such as low-quality or biased data); algorithmic risks (for example, “black box” algorithms that have unknown decision structures); and performance risks (for example, systems operating beyond their technical capabilities).
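To make the data-risk category concrete, here is a minimal illustrative sketch, in Python, of the kind of automated screening an organization might run before training a model. The field names and thresholds are hypothetical assumptions for illustration, not drawn from the study.

```python
from collections import Counter

# Hypothetical pre-training data-quality gate: flag incomplete records and
# demographic groups that are badly under-represented in the training set.
MAX_MISSING_RATE = 0.05   # assumed tolerance for records with missing fields
MIN_GROUP_SHARE = 0.10    # assumed floor for any group's share of the data

def screen_training_data(records, group_field):
    """Return a list of human-readable data-risk warnings."""
    warnings = []

    missing = sum(1 for r in records if None in r.values())
    if missing / len(records) > MAX_MISSING_RATE:
        warnings.append(f"{missing} of {len(records)} records have missing fields")

    counts = Counter(r[group_field] for r in records)
    for group, count in counts.items():
        if count / len(records) < MIN_GROUP_SHARE:
            warnings.append(f"group '{group}' is under-represented ({count} records)")
    return warnings

# Toy example: one record is incomplete, so the gate raises a warning.
data = [
    {"age": 34, "group": "A"},
    {"age": None, "group": "A"},
    {"age": 51, "group": "B"},
    {"age": 29, "group": "B"},
]
print(screen_training_data(data, "group"))
```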
Organizations that design trusted AI systems are committed to anticipating, managing and measuring the risks associated with the technologies.
They gain insights from tools such as data-based risk analytics, continuous monitoring, and supervised response mechanisms.
Then they adapt their systems to reflect user feedback and model validation. Significantly, their AI developers collaborate closely with their risk professionals – both while the system is being developed and once it has gone live.
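As a simple illustration of what continuous monitoring can look like in practice, the sketch below compares live model inputs against their training-time baseline and raises a drift alert. The threshold and the toy numbers are assumptions for the example only.

```python
import statistics

DRIFT_THRESHOLD = 3.0  # assumed alert level, in baseline standard deviations

def check_drift(baseline, live):
    """Flag when the live feature mean drifts away from the training baseline.

    A deliberately simple drift test: measure how far the live mean sits
    from the baseline mean, in units of the baseline standard deviation.
    """
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    z = abs(statistics.mean(live) - mu) / sigma
    return z > DRIFT_THRESHOLD, z

# Example: production inputs have shifted well away from the training data.
training_values = [10.2, 9.8, 10.0, 10.4, 9.6, 10.1]
production_values = [13.9, 14.2, 13.7, 14.0]

drifted, score = check_drift(training_values, production_values)
if drifted:
    print(f"Drift alert: live data is {score:.1f} baseline std devs from training")
```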
Leveraging trusted AI as a competitive advantage
Today many organizations are investing heavily in AI in the belief that it will enable them to better serve their customers, operate more efficiently and gain market share.
In fact, almost half (47%) of the corporates surveyed for the Tech Horizon research said that AI accounted for the largest share of their technology investment over the past two years.
AI is likely to be a source of competitive advantage; however, this is only possible if it is aligned with an organization’s stated purpose and used in ways that enhance its brand, products, services and stakeholder experience.
Fundamentally, AI will only be trusted if it is seen as a force for good.
Health is a great example of a sector that offers organizations opportunities to gain competitive advantage while using AI to make a positive difference.
In the case of COVID-19, for instance, AI – combined with comprehensive global data sets and appropriate privacy safeguards – is already being used to improve the diagnosis and treatment of individual patients, advance drug-discovery research, track community transmission, and model the spread of infection.
In fact, EY is working with the Massachusetts Institute of Technology (MIT) and several other organizations on a platform that allows public health professionals to track COVID-19 hotspots and movement.
Another example that is very topical right now is the application of AI to food production and the related supply chain. AI is already being used in agriculture to improve soil health and detect plant diseases and pests.
Going forward, however, AI systems could help to monitor food safety throughout the supply chain, improve the cleaning of processing equipment, and provide transparency to consumers around the provenance of goods.
AI can also help build trusted systems that enhance the enterprise resilience of organizations in every sector.
By automating their IT infrastructure, organizations can improve the speed of their systems, operate them more cost-efficiently and gain greater control and visibility over how they are being used.
Multiple automated tasks can be chained together to execute larger workflows or processes.
Furthermore, AI methods for IT operations – in combination with machine learning – allow organizations to improve their situational awareness and response, identify patterns, and gain insight into applications and business relationships across hybrid infrastructure.
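As an illustration of the pattern-identification idea, here is a minimal sketch of the kind of anomaly detection an AIOps pipeline might apply to an infrastructure metric. The metric, window size and threshold are illustrative assumptions rather than any particular product's method.

```python
import statistics

def find_anomalies(samples, window=5, threshold=3.0):
    """Flag metric samples that deviate sharply from their recent history.

    For each point, compare it with the mean of the preceding window;
    anything more than `threshold` standard deviations away is anomalous.
    """
    anomalies = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        mu = statistics.mean(history)
        sigma = statistics.stdev(history) or 1e-9  # guard against flat windows
        if abs(samples[i] - mu) / sigma > threshold:
            anomalies.append((i, samples[i]))
    return anomalies

# Example: response-time readings (ms) with one sudden spike.
latency_ms = [102, 98, 101, 99, 103, 100, 97, 480, 101, 99]
print(find_anomalies(latency_ms))  # -> [(7, 480)]
```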
As a result of COVID-19, we are now entering a world that will present new challenges and new opportunities – some of which relate to AI.
In this world, where so much is unknown, trust will be crucial to organizational survival.
Organizations with AI systems that are trusted by businesses and consumers alike will have a clear advantage over their competitors.
In addition to having the agility to respond to emerging opportunities and threats, they will benefit from having loyal stakeholders. Trust is the foundation for future success – and trust only happens by design.