As the threat posed by disinformation continues to spread, cybersecurity is increasingly turning to AI as a tool for truth rather than manipulation. Disinformation threatens society because the deliberate promotion of false information and manipulative narratives undermines public trust and democratic decision-making. While generative tools make it easier for anyone to create fake content, advanced AI-driven systems are also well suited to monitoring potentially deceptive content and detecting false information. Several research programs are now developing sophisticated machine learning algorithms that can uncover cyber threats, predict deceptive behavior and spot disinformation such as fake news. AI already strengthens a wide range of cybersecurity domains, and its ability to analyse patterns, language and context can now be applied directly to tackling disinformation.
Uncovering Cybercrime on the Dark Web
According to cybersecurity experts, the number of phishing attacks doubled during the latter half of 2024, and other deceptive threats to digital safety continue to endanger national security, undermine democracy and erode public trust. The dark web is notorious as a centre of cybercrime, yet the shady websites where hackers market their services increasingly resemble legitimate sites. Individuals can use dark web monitoring to improve the security of their personal data, while larger organizations, including governments and law enforcement agencies, are now using AI to uncover potential threats originating from the hidden depths of the internet. The technology can categorize dark web forums, and with language models summarizing potential threats, organizations are able to extract high-value intelligence. With enhancements such as ‘in-context semantic search’, AI can quickly surface highly relevant results that would be difficult to locate with a conventional keyword search.
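As a rough illustration of the idea behind semantic search, the sketch below embeds a handful of invented forum posts with the open-source sentence-transformers library and ranks them against a natural-language query by cosine similarity. The model name, the posts and the query are all placeholders, not details of the systems described above.

```python
# Minimal sketch of embedding-based semantic search over scraped forum posts.
# Everything here (model choice, posts, query) is illustrative, not a
# description of any specific dark web monitoring platform.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical corpus of monitored forum posts (placeholder text).
posts = [
    "Selling access to a compromised corporate VPN, payment in crypto.",
    "Looking for a graphic designer for a gaming forum banner.",
    "Fresh batch of phishing kits targeting retail banking customers.",
]

# A semantic query can surface relevant posts even when the keywords differ.
query = "credential theft services offered for sale"

post_embeddings = model.encode(posts, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank posts by cosine similarity to the query, best matches first.
scores = util.cos_sim(query_embedding, post_embeddings)[0]
for score, post in sorted(zip(scores.tolist(), posts), reverse=True):
    print(f"{score:.2f}  {post}")
```

Unlike a keyword search, the query above shares almost no vocabulary with the posts it is meant to retrieve, which is the property that makes this kind of search useful for trawling large forums.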
Detecting Deception in High-Stakes Interactions
As well as monitoring language and content quickly and efficiently, AI has proved highly effective at detecting lies in high-stakes interactions where a strategic approach pays off. Researchers at the Rady School of Management at the University of California San Diego have found that machine learning algorithms can identify deception better than humans can.
The research was based on the British TV game show ‘Golden Balls’, whose winning strategies involved communication and deception. The final round created a prisoner’s dilemma: contestants had to choose whether to split or steal the jackpot, and both could walk away with money only if they trusted each other. More than 600 participants in the research were shown episodes of the program and asked to predict the contestants’ behavior. They achieved an accuracy rate of only around 50%, whereas the AI algorithms were correct in their predictions almost three quarters of the time.
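To give a sense of how such predictions can be automated, the sketch below trains a very simple text classifier to guess whether a contestant who promises to split actually does so. The statements and labels are invented, and the model is deliberately minimal; the published study used a far richer setup.

```python
# Minimal sketch of predicting "split" (1) vs "steal" (0) from a contestant's
# pre-decision statement. Data and labels are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

statements = [
    "I promise you, I am absolutely going to split, you have my word.",
    "Let's both walk away with something, I'm splitting with you.",
    "I swear on my family that I will split the money.",
    "I'm going to split, there is no reason for either of us to steal.",
]
labels = [0, 1, 0, 1]  # invented outcomes: 1 = split, 0 = stole

# Bag-of-words features plus a linear model, the simplest possible baseline.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(statements, labels)

print(clf.predict(["I give you my word that I will split."]))
```

A useful deception detector would of course need far more data and richer features, but the overall pipeline shape, text in and a prediction of honesty out, is the same.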
Identifying Fake News on Major Platforms
One application of machine learning algorithms that accurately detect lying will be spotting fake news and reducing the amount of fabricated content on major platforms. While much fake news ends up being widely shared as misinformation, it is initially generated and spread by someone who is well aware that the information it contains is false. Fake news not only erodes public trust but can also undermine national security by influencing opinions and actions. To combat online disinformation and remove this threat, university academics in the UK have developed an AI tool that is 99% accurate at identifying fake news. The model, which reviews news content and judges whether or not it is genuine, has been refined with a number of machine learning techniques. The researchers hope that, with more sophisticated machine learning systems, they will eventually develop a model that correctly identifies disinformation every time it appears in scanned news content.
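As a rough sketch of how headline accuracy figures like this are usually produced, the example below trains a baseline text classifier on a hypothetical CSV of labelled articles and measures its accuracy on a held-out test set. The file name, column names and model are assumptions for illustration, not details of the UK researchers' tool.

```python
# Minimal sketch of how a fake news classifier is trained and how its
# accuracy is measured on articles the model has never seen.
# "articles.csv" is a hypothetical file with "text" and "label" columns.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("articles.csv")  # label: 0 = genuine, 1 = fake (assumed)

X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=42
)

# TF-IDF features plus a linear classifier, a common baseline for this task.
clf = make_pipeline(
    TfidfVectorizer(stop_words="english"),
    PassiveAggressiveClassifier(),
)
clf.fit(X_train, y_train)

# Accuracy on the unseen test articles is what figures like "99%" refer to.
print(f"Held-out accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2%}")
```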
While much has been made of the cybersecurity risks posed by AI, the technology is now proving highly valuable in reducing the spread of disinformation and the threats to safety that come with it. Because machine learning algorithms perform so well at detecting fake news and deceptive behavior in general, AI is a highly useful tool for monitoring false information across the internet, from the depths of the dark web to easily accessible social media platforms.