Ethics

How can AI be used responsibly?


Recently, Michelle Bachelet, the U.N. high commissioner for human rights, called for countries to ban uses of artificial intelligence (AI) that imperil human rights. The call recognises a key dilemma of the technology: the potential for misuse, whether intentional or not, which arises because these systems learn as they go.

However beneficial the technology might be, there are still serious concerns surrounding its implementation by governments and businesses. A Pew Research Center study found that fewer than half of respondents in the UK, US, Canada, Germany and France, among others, believe that the development of AI has been good for society.

Yet, it’s clear that AI is here to stay and has many important benefits, from fraud detection in financial transactions to possible cancer detection in CT scans. So, the question becomes one of transparency and ethical use.

Removing bias

Problems such as “black box” algorithms continue to hamper AI. The black box problem refers to a lack of visibility into how an algorithm reaches its decisions. Shedding light on these processes is vital to building trust in the technology and the organisations that use it.

Public safety is considered a leading area of potential for AI, as its benefits could be considerable in the fast-paced world of first responder work. However, prior to its use, agencies must first develop an ethical framework to ensure understanding, accountability and acceptance.

For law enforcement in particular, it is vital to ensure a fair and equitable process, free of bias. As Bachelet said, “The risk of discrimination linked to AI-driven decisions – decisions that can change, define or damage human lives – is all too real.”

For AI, the effects are often a result of the way the systems have been set up. Bad data can yield bad AI and bad results. Organisations need to ensure transparency in the data, in the way the AI is trained to act on that data and in how the AI reaches conclusions when used in the real world. And because circumstances in the emergency services sector change rapidly, this process must be continuous, revisited as new data and adjustments are introduced.
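To make “bad data yields bad results” concrete, here is a minimal sketch of one transparency check an organisation could run before training: a simple representation audit. The field name, threshold and reporting style are illustrative assumptions, not a prescribed method.

```python
# Minimal illustrative sketch of one pre-training transparency check:
# does the training data under-represent any group? Skewed inputs are one
# way "bad data yields bad AI". Field names and threshold are hypothetical.

from collections import Counter

def representation_report(records: list[dict], field: str, min_share: float = 0.10) -> None:
    """Print the share of each value of `field` and warn when a group falls
    below `min_share` of the data, prompting review before training."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    for value, n in counts.most_common():
        share = n / total
        flag = "  <-- under-represented, review before training" if share < min_share else ""
        print(f"{field}={value}: {share:.1%}{flag}")

# Example: incident reports heavily skewed toward one district.
representation_report(
    [{"district": "north"}] * 90 + [{"district": "south"}] * 8 + [{"district": "east"}] * 2,
    field="district",
)
```

A check like this does not remove bias on its own, but it makes one property of the data visible and reviewable, which is the kind of transparency the paragraph above calls for.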

All of this is critical to ensuring the AI works as intended, but equally important is the intended use.

Responsible use

For emergency services, areas of use could include mining vast amounts of real-time data for quicker decision-making about the safety of people and infrastructure, supporting optimal mobilisation of the emergency services workforce, and monitoring the mental wellbeing of staff based on the volume and frequency of high-stress incidents they have worked.

For these types of uses, AI should never be the sole decision-maker. Where lives are on the line, leaving responsibility entirely to machines is negligent and could, at times, be dangerous. Instead, a human must always be involved in the decision. For these sensitive uses, “assistive AI” is the answer: AI can augment, but never replace, human judgement and intuition when snap decisions are required.

In an emergency services control room, assistive AI can act as a second set of eyes, helping personnel make better decisions by finding connections among large amounts of information. While hundreds or thousands of calls for service may come into a control room each day, the ability to analyse data immediately to inform decisions is limited. Staff do the best they can to capture, process and convey information under pressure, but deeper analysis tends to happen after the fact, to inform and improve future performance. Here, AI can play a positive role.

For example, call takers and dispatchers dealing with dozens of calls during a major traffic accident might miss that a lorry involved in a crash was described by one witness as a “tanker,” which could indicate the presence of potentially hazardous materials. Having such information may be crucial to police, fire and ambulance crews dispatched to the scene. An AI component reviewing incoming data could flag that detail and alert all dispatchers, who could then assess the situation and inform first responders, so they could take necessary precautions.
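As a rough illustration of how such a flag might work, here is a minimal sketch in which a simple keyword watchlist stands in for a trained model; the names (`CallNote`, `flag_hazards`, `HAZARD_TERMS`) are hypothetical, not any vendor’s API.

```python
# Minimal illustrative sketch: flag hazard-related terms in incoming call
# notes. A real deployment would use a trained model and keep a human in the
# loop; a keyword watchlist stands in here. All names are hypothetical.

from dataclasses import dataclass

HAZARD_TERMS = {"tanker", "fuel", "chemical", "gas leak", "flammable"}

@dataclass
class CallNote:
    call_id: str
    text: str

def flag_hazards(note: CallNote) -> list[str]:
    """Return any watchlist terms found in a call note (case-insensitive)."""
    lowered = note.text.lower()
    return [term for term in HAZARD_TERMS if term in lowered]

def review_incoming(notes: list[CallNote]) -> None:
    """Surface flagged calls to dispatchers; a human still decides what to do."""
    for note in notes:
        hits = flag_hazards(note)
        if hits:
            print(f"ALERT {note.call_id}: possible hazmat terms {hits} - verify with caller")

# Example: one witness mentions a "tanker" during a multi-call incident.
review_incoming([
    CallNote("C-1041", "Two cars and a lorry collided on the M4."),
    CallNote("C-1042", "The lorry looked like a tanker, something was spilling."),
])
```

In a real system the flagging would come from a statistical model rather than a fixed list, but the division of labour is the point: the software surfaces the detail, and a human dispatcher decides what it means.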

Heeding the call for responsible AI

The use of AI is becoming increasingly common, with no signs of slowing. Given the vast opportunities for AI to do good, yet the concerns that it may do harm, governments and public service providers must ensure that AI deployments are well thought out, with serious consideration for how they are designed and used, and with sufficient buy-in from stakeholders.

There can be no acceptance without trust. And trust in AI must be earned. It’s therefore up to those who understand its benefits, from technology proponents to vendors to users, to ensure that transparency and ethical use are built into AI deployments before implementation. Care at the beginning can prevent problems later.


Author

  • Nick Chorley

    Nick Chorley is the director of public safety and security for Hexagon in EMEA. He has worked for more than 30 years supplying command and control systems to emergency services and speaks regularly on public safety IT topics.
