The dark side of AI 

Artificial intelligence (AI) has quickly become one of the most useful and versatile productivity tools of the modern age. But is there a darker side to the technology that we will all become more keenly aware of? AI bias has been common knowledge for the last five years – AI’s guilty open secret. But with the ChatGPT data input scandal hitting news headlines globally, generative AI has come under scrutiny, raising broader questions about the ethics of AI, its application, and how these escalating problems can be dealt with.

The open secrets and unacknowledged problems of AI

Toxic data input practices

ChatGPT is considered one of the most remarkable tech innovations of recent times. Capable of generating text on almost any topic or theme, it is widely viewed as the most powerful AI chatbot around. In January 2023, a Time investigation uncovered the practices used to make it so. These included the data labelling process, in which an enormously underpaid Kenyan workforce (paid less than $2 an hour) sifted through and labelled reams of often highly disturbing and damaging data – without support, and without thought for their well-being. And this is unlikely to be the only instance.

Many AI companies start out by outsourcing their data labelling, and it’s possible those companies are entirely ignorant of the labelling processes and the conditions of the workers. While it is to be hoped that the leadership behind ChatGPT were unaware of what was being done in their name, a lack of due diligence can’t be viewed as a reasonable excuse. This is something that must be addressed. But data input isn’t the only concern.

Bias

Although AI was originally positioned as a way to remove the threat of personal bias from a range of decision-making processes, AI bias has been the centre of a huge amount of attention in recent years. Embarrassingly high-profile cases have hit global headlines, from Amazon’s sexist recruitment AI to an American healthcare algorithm – used to make decisions about more than 200 million people – that was subsequently found to discriminate against black patients. Because AI relies upon human-labelled data, all AI systems are at risk of becoming biased. And right now, the only way to combat this is through the introduction of explainable AI (XAI), which enables decision-making processes to be questioned and faulty processes to be identified and corrected. The problem is that this approach is still far from widely adopted. And it’s not the only concern.
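To make that concrete, the sketch below shows what XAI tooling can look like in practice, using the open-source SHAP library to break a single model decision down into per-feature contributions. The data, features and model here are entirely synthetic, invented purely for illustration:

```python
# A minimal XAI sketch: SHAP attributes one model decision to its input
# features. The data and model are synthetic, purely for illustration.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                  # four hypothetical applicant features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic accept/reject label

model = RandomForestClassifier(random_state=0).fit(X, y)

# Ask the model to account for a single decision: per-feature contributions.
explainer = shap.TreeExplainer(model)
print(explainer.shap_values(X[:1]))  # a feature dominating unexpectedly is a red flag
```

If a protected attribute – or a proxy for one – dominates those contributions, the decision can be challenged and the faulty process corrected before it does harm.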

The ethical application of AI

AI is becoming increasingly advanced. In 2022, The Lancet reported that AI could determine a person’s race from an X-ray – something that even the most experienced doctor would be unable to do. But how can we ensure that this capability is used properly and ethically? If such advanced AI were combined with the faulty AI of the previously mentioned US healthcare algorithm, could we find ourselves in a position where a black patient is discriminated against before they even meet a doctor? Could lives be put at risk?

Even moving away from healthcare and bias, AI carries a whole range of ethical concerns. If I use a bot to conduct the first phase of interviews and one interviewee has a speech impediment or a heavy accent, there would be an ethical duty to interview that candidate in a different way. A human could make that decision. A bot, programmed to expect ‘normal’, would simply dismiss that candidate as unsuitable.

Then there is the potential for phishing operations at large scale. While ChatGPT’s programmers have put restrictions in place around toxic content, there are (and will be) ways to subvert those barriers. And anyone with an Amazon Web Services account can download and run their own generative AI system. Until we have the means to address each of these issues, AI needs to be handled with care.
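To show how low that barrier is, here is a minimal sketch of running a generative model yourself with the open-source Hugging Face transformers library – a handful of lines, with no approval process and no content filter beyond whatever the operator chooses to add. The model name is just an example of a freely downloadable one:

```python
# A minimal sketch: anyone can download and run a generative model locally.
# "gpt2" is just an example of a small, freely available model; the first
# run downloads its weights automatically.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("The future of AI depends on", max_new_tokens=40)
print(result[0]["generated_text"])
```

The same few lines work for far larger open models, which is precisely why restrictions built into any single hosted service can only ever be part of the answer.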

What might the future of AI really look like?

Artificial General Intelligence (AGI) has always been seen as the ultimate aim. Fortunately, it is still a long way off. But we’re at a point where we have some feeling of sentience in our interactions with AI, and that raises questions about where we should allow the technology to go. If “dumb” ChatGPT has the potential to be entirely good or entirely evil, how do we prevent a dystopian future of rogue machines operating for their own good, and not humanity’s? Science fiction writers have raised these questions for a long time – most notably Isaac Asimov with his Three Laws of Robotics – but we now seem to be diving headlong into a shark tank of science fact.

You can’t build a nuclear bomb at your kitchen table, but anyone can purchase the components to create highly sophisticated AI. It doesn’t take much to create something for nefarious purposes, and to do so with a significant degree of anonymity. That’s something we have to consider when thinking about AI’s future.

Regulation is the obvious first step, but it’s difficult to know how it can be managed. There have been some tentative movements – such as GDPR’s requirement that automated decisions be explainable, and the new EU AI Act aimed at regulating “high-risk” AI. But comprehensive, potentially intrusive regulation – the active monitoring of data centres, forced compliance from tech producers and intervention in their work – is still a considerable distance away.

AI tech is out there now. No matter how scary it might be, there’s no putting the genie back in the bottle – and there are many reasons why we wouldn’t want to. Speech tech, like natural language processing (NLP), is saving companies billions through fraud detection while supporting compliance and identifying the vulnerable. Endless labour-saving processes are in place across sectors thanks to AI and intelligent automation. But to secure a future that is safe as well as productive, we need not only to be aware of AI’s limitations but also to be wary of its dark underbelly, and to make changes as we move forward.

Author

  • Nigel Cannings is the CTO at Intelligent Voice. He has over 25 years’ experience in both law and technology, is the founder of Intelligent Voice Ltd, and is a pioneer in all things voice. Nigel is also a regular speaker at industry events, including but not limited to those hosted by NVIDIA, IBM and HPE, and AI Financial Summits.
