Ethics

Building ethical AI products to inspire trust with your customers

The evolution of artificial intelligence (AI) as a technology, and its increasing accessibility to organisations without specialist skills, is already having a positive effect in many sectors. AI can be used to enhance and automate development and manufacturing processes, making products better and more cost-effective to produce. Combined with human interaction, AI can make customer services smarter and more streamlined. Mundane and repetitive tasks can be handed over to AI-powered processes, freeing personnel to concentrate on more productive activities. AI systems can be used for predictive maintenance to keep operational costs low. In healthcare, there is huge potential for AI to improve patient care and assist clinicians with diagnosis and treatment plans.

According to McKinsey’s 2021 global survey on the state of AI, the prospects for the technology are strong: nearly two-thirds of respondents said their companies’ investments in AI will continue to increase over the next three years. However, there is a downside. Recent news headlines of AI applications failing continue to put AI under scrutiny and raise the question of how much AI can really be trusted.

Building trust

When AI is under scrutiny, the first thing to try to understand is how the AI system is built: what data is used to train it, what algorithms are used, and how the system is tested and validated. Most AI models are ‘black boxes’ – their inputs and operations are not visible to the user or to any other interested party. If there are questions about the predictions made, the decisions taken, or errors identified, a black box system offers no way to find the answers. This creates doubt, confusion and mistrust.

For AI systems to be trusted, they need to be transparent. This means building AI implementations that are explainable, using algorithms that are not black-box in nature. Explainable AI can help businesses verify machine learning models and identify the reasoning behind both their direct and indirect impacts on operational processes. Specific model decisions can be analysed, and machine learning models debugged and improved to uncover better insights. To achieve explainable AI, systems need to be built on an AI trust framework founded on the principles of reliability, safety, transparency, responsibility and accountability.
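To make this concrete, here is a minimal sketch in Python (using scikit-learn, with a synthetic dataset and hypothetical feature names) of how a single decision from a transparent linear model can be broken down into per-feature contributions – exactly the kind of interrogation a black box does not allow:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical loan-approval features; the data is synthetic for the sketch
feature_names = ["income", "debt_ratio", "years_employed"]
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] - X[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Explain one specific decision: each feature's contribution to the score
x = X[0]
contributions = model.coef_[0] * x
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:15s} contributed {c:+.3f} to the decision score")
print(f"{'intercept':15s} contributed {model.intercept_[0]:+.3f}")

For genuinely opaque models, post-hoc tools such as SHAP or LIME play a similar role, but the principle is the same: every individual decision can be traced back to the inputs that drove it.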

Transparent, explainable AI does not on its own create trust. An AI system is by its very nature a continuously evolving structure: it will change and adapt according to the data on which it is trained, so to ensure the system remains free of bias as it evolves, constant ‘retraining’ is needed. This retraining needs to be built into the business model and reflected in the KPIs, so there is a clear audit trail.
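As a minimal sketch of what such an audit trail might look like – assuming a scikit-learn-style model and an append-only log file, both of which are illustrative choices rather than a prescribed design:

import json
from datetime import datetime, timezone

def retrain_with_audit(model, X_new, y_new, log_path="retraining_audit.jsonl"):
    """Retrain on fresh data and append an audit record (hypothetical schema)."""
    model.fit(X_new, y_new)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "n_samples": len(X_new),
        "train_accuracy": float(model.score(X_new, y_new)),
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return model

Each retraining event is then a dated, reviewable record – which is what turns ‘constant retraining’ from a good intention into something that can be audited against KPIs.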

Transparency engenders confidence

No AI model or system can be 100 percent accurate or reliable. There will always be a margin of error and misinterpretation unless every prediction and action is overseen by a human, which is clearly impractical. It is therefore important to understand where a model’s decisions will be entirely accurate and where there is room for error. This transparency engenders confidence in the implementation, reinforced by the accessibility of the underlying data should an explanation be required.
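One common way to operationalise this is to act automatically only on high-confidence predictions and route the rest to a human. The sketch below assumes a classifier that exposes predict_proba; the threshold is illustrative and would in practice be chosen from the model’s validated error rates:

import numpy as np

def triage_predictions(model, X, threshold=0.9):
    """Split predictions into auto-accepted and human-review buckets."""
    proba = model.predict_proba(X)
    confidence = proba.max(axis=1)  # probability of the predicted class
    auto = confidence >= threshold  # threshold is an illustrative assumption
    return {
        "auto_decisions": np.flatnonzero(auto),       # safe to act on
        "needs_human_review": np.flatnonzero(~auto),  # margin-of-error cases
    }

This keeps full human oversight where the margin of error lives, without the impractical cost of reviewing everything.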

Safety is a keyword in AI, and one that is not used as often as it should be. Like any other data network, AI systems can be vulnerable to attack and manipulation. The security of the system and the data it uses should be a paramount consideration, but safety also means ensuring the system behaves fairly. In AI, this means ensuring there is no bias and that the insights and outcomes from the AI system are fair to all users. Transparency means that any decision can be interrogated and any bias removed – something not possible with black box models. The human perspective is essential here: the real-world context must always be considered when designing AI models to enable the best user experience.
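As one hedged illustration of the kind of fairness check this implies, the sketch below compares positive-outcome rates across user groups (demographic parity) and flags disparities above a chosen tolerance; the data and the threshold are assumptions made for the example:

import numpy as np

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two groups."""
    rates = {g: predictions[groups == g].mean() for g in np.unique(groups)}
    return max(rates.values()) - min(rates.values()), rates

# Illustrative usage with synthetic model outputs
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
gap, rates = demographic_parity_gap(preds, groups)
if gap > 0.1:  # tolerance is an assumption; set it per use case
    print(f"Potential bias: positive rates differ by {gap:.2f} across groups {rates}")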

Be responsible – be accountable

Building ethical AI systems means being responsible and accountable. AI systems must be explainable, with full transparency at every stage of development. Boundaries should be clearly defined, with both the short- and long-term benefits and potential impacts highlighted. Make sure your customers understand what they are using and how it has been created. The key element of explainable AI is that it exposes bias and omissions within the system so they can be resolved. It enables users to understand the route the system or algorithm took to reach a decision.

AI is still a relatively new technology, finding its place in the business operations of the enterprise as well as in many other sectors. For AI to mature into an accepted and integrated part of how processes work and services are delivered, it must gain user trust. There must be close collaboration between all the teams involved, from the data scientists to those responsible for delivering the completed product. Everyone who is part of the project needs to understand the social and ethical implications of why and how the AI system is developed and deployed, and be able to fully justify any decision that is queried. This transparency, together with assured data privacy and robust model security, will result in a positive user experience, which ultimately will build trust in AI and digital trust more broadly.
