No technology is perfect, and by now we’ve all witnessed some of the mistakes that artificial intelligence (AI) tools, such as large language models (LLMs), can make. With any use of technology, it’s important to learn from the past, but AI is moving so fast that we need to learn very quickly. AI is not just a tool; it’s a rapidly evolving field that challenges us to adapt and improve continuously.
Using AI doesn’t mean we can switch our brains off. Quite the opposite, in fact. It means we need to think about what we’re doing and saying, and the benefits and limitations of this technology, which has advanced rapidly over the past few years.
It’s progressed so quickly that lessons from the past are arguably more important than ever. If we’re to take advantage of advances in AI, then we also need to collaborate and share these lessons. And we need to openly discuss what we’ve learnt and how we can make sure we maximize the benefits of the technology while minimizing the risks it inevitably brings with it.
We’ve all heard the world’s largest technology companies promising us that AI technologies will make us more productive and efficient. AI will be pervasive and will transform the way people work in every sector, but we shouldn’t slip into autopilot. AI is a copilot, and that means we are the pilot; we need to take control, set the direction and make the decisions. This means understanding the technology at a deeper level and ensuring that it’s aligned with our goals and values.
So, anyone planning to implement and run AI for any purpose needs to think about what they want to achieve, and plan very carefully exactly what they intend to use AI for. And – this is perhaps where AI differs from many other technologies – people really should understand how it’s been trained and what its limitations are. Gaining this understanding can help in designing systems that are robust, reliable, and aligned with ethical standards.
I’ve seen and read about many real-world examples that show the power of AI. And more that demonstrate how projects can quickly – and surprisingly – spin out of control. This is often because biased datasets were used to train the AI models, or because appropriate policies and safety procedures weren’t put in place.
Recruitment bias
Amazon was one of the early adopters of AI applications, incorporating machine learning into its hiring processes from 2014. The AI system analyzed hundreds of applications each week, leveraging patterns identified from a decade’s worth of data. Unfortunately, the system developed a gender bias, favoring male candidates over female ones. Efforts to rectify this bias were unsuccessful, leading Amazon to discontinue the project in 2017.
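Amazon has never published the details of its tool, but the kind of imbalance described above can be surfaced with a very simple audit. The sketch below is purely illustrative: a hand-written rule stands in for a trained model, the applicant data is invented, and the check applied is the common ‘four-fifths’ rule of thumb for comparing selection rates between groups.

```python
# Illustrative only: auditing a hypothetical hiring model for group-level bias.
# The "model" and the data below are invented for this example.
from collections import defaultdict

def predict(features):
    """Stand-in for a trained model: returns True if the candidate is shortlisted."""
    return features["years_experience"] >= 5

applicants = [
    {"gender": "female", "features": {"years_experience": 6}},
    {"gender": "female", "features": {"years_experience": 3}},
    {"gender": "male",   "features": {"years_experience": 7}},
    {"gender": "male",   "features": {"years_experience": 5}},
]

shortlisted, totals = defaultdict(int), defaultdict(int)
for a in applicants:
    totals[a["gender"]] += 1
    if predict(a["features"]):
        shortlisted[a["gender"]] += 1

rates = {g: shortlisted[g] / totals[g] for g in totals}
print(rates)  # -> {'female': 0.5, 'male': 1.0}

# "Four-fifths" rule of thumb: flag a concern if any group's selection rate
# falls below 80% of the highest group's rate.
if min(rates.values()) < 0.8 * max(rates.values()):
    print("Warning: selection rates differ markedly between groups")
```

Simple checks like this won’t explain where the bias came from, but they can reveal it early, before a system is trusted with real hiring decisions.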
Facial recognition flaws
Remarkable progress has been made in the field of facial recognition; however, it’s well reported that AI has significant weaknesses in this area too. Research indicates that some facial recognition systems have a false positive rate up to 100 times higher when identifying black people compared to white people, leading to serious civil rights concerns. Machines lack an understanding of the human experience, and distinctions that are obvious to people are often invisible to algorithms. The primary challenge is identifying and addressing these inaccuracies before they result in significant issues.
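To make that statistic concrete, here is a minimal sketch of how a per-group false positive rate is measured. The comparison results below are invented; the point is simply that the same system can produce very different error rates for different groups, which is what the research describes.

```python
# Illustrative only: per-group false positive rates for a face-matching system.
# Each record compares two *different* people, so any "match" is a false positive.
from collections import defaultdict

# Invented results: (demographic group, did the system say "match"?)
non_matching_pairs = [
    ("group_a", False), ("group_a", True),  ("group_a", False), ("group_a", False),
    ("group_b", False), ("group_b", False), ("group_b", False), ("group_b", False),
]

false_positives, totals = defaultdict(int), defaultdict(int)
for group, said_match in non_matching_pairs:
    totals[group] += 1
    if said_match:
        false_positives[group] += 1

for group in totals:
    fpr = false_positives[group] / totals[group]
    print(f"{group}: false positive rate = {fpr:.2%}")

# If one group's rate is many times higher than another's, the system will wrongly
# "identify" innocent people from that group far more often.
```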
Assisting the medical professional with accurate diagnoses
AI’s pattern recognition abilities have become a remarkable asset in medicine, particularly in diagnosing and treating eye and skin conditions. AI can match or even exceed dermatologists in identifying malignant melanomas from images of skin lesions. Additionally, AI has proven effective in helping medical professionals detect pneumonia through the analysis of chest X-rays. Overall, AI has contributed to faster and more accurate diagnoses, improved patient care, enhanced operational efficiency, and personalized medicine.
Although AI systems follow their programmed objectives to the letter, they can still behave unpredictably, and their approach to a task can differ significantly from human methods. For instance, when engineers designed an AI tool to maximize points in a boat-racing game, the tool didn’t race like a human-controlled boat at all. Instead, it found alternative ways to score points, even if that meant repeatedly crashing the boat and finishing last.
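This is a classic case of a misspecified reward: the system was rewarded for a proxy (points) rather than the outcome its designers actually wanted (winning the race). The toy sketch below, with invented numbers, shows how a strategy that loops and crashes can outscore one that races properly when only the proxy is measured.

```python
# Illustrative only: a toy example of reward misspecification ("reward hacking").

def proxy_reward(points, finished):
    """What the designers actually optimised: points, with no credit for finishing."""
    return points

def intended_reward(points, finished):
    """What the designers really wanted: finish the race; points are secondary."""
    return points + (1000 if finished else 0)

# Two strategies with invented outcomes:
race_to_finish = {"points": 120, "finished": True}    # races properly and completes the course
loop_and_crash = {"points": 400, "finished": False}   # circles a cluster of targets, crashing repeatedly

for name, outcome in [("race_to_finish", race_to_finish), ("loop_and_crash", loop_and_crash)]:
    print(name,
          "proxy:", proxy_reward(outcome["points"], outcome["finished"]),
          "intended:", intended_reward(outcome["points"], outcome["finished"]))

# Under the proxy reward, looping and crashing scores higher (400 vs 120), so an
# optimiser will prefer it, even though it is clearly not what anyone wanted.
```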
Learn from the past, plan for the future
So, although many organizations are rushing to adopt AI, it’s vital to understand how the technologies are trained, what they are capable of, and where their vulnerabilities lie. I advise everyone to learn the best ways to work with these ‘copilots’, and to share ideas and lessons in order to maximise results and avoid the pitfalls.
Perhaps most importantly, if you are developing your own AI, and particularly if you are providing it to others to use, ensure that you have ways of correcting its learning when bad or inaccurate data has been introduced. Otherwise, you risk errors compounding over time.

Mark Cunningham-Dickie is Principal Incident Response Consultant at Quorum Cyber. With over 20 years of experience in the technology industry, including more than ten in technical roles for law enforcement and other government-funded organisations, Mark has worked on hundreds of cyber incidents as part of Quorum Cyber’s Incident Response (IR) team.