Future of AI

Disinformation vs misinformation – where AI fits in, and where it doesn’t

“Fake news” has become a firm part of our daily lives, influencing everything from our conversations with each other to our elections. With a recent study showing that more than 80% of people in the UK regularly come across fake news – and one in ten people never check the reliability of online content – it’s a problem that doesn’t seem to be going anywhere anytime soon.

People often use the terms ‘disinformation’ and ‘misinformation’ interchangeably when referring to fake news, and many aren’t aware there is a difference between them. While disinformation relates to spreading false information intentionally to deceive people, misinformation is defined as inaccurate or incorrect information that is not deliberately deceptive. 

For example, when Russia creates a deepfake video of Volodymyr Zelensky telling Ukrainian troops to surrender, its deliberate purpose is to deceive, harm, or sow chaos, and it is disinformation. But when your uncle reshares a controversial, but out-of-context, soundbite or shares a factually inaccurate article written by ChatGPT that he actually believes is true…it's misinformation. And this is where it gets really tricky.

While disinformation is a big problem, and one that can, to an extent, be combated with artificial intelligence (AI), misinformation is intractable. Trying to solve it with AI will not only fail, it will potentially create bigger societal problems, impinging on free speech, personal liberties, and even the advance of science.

Despite this, there are many companies out there attempting to detect misinformation, from Logically – an AI startup that monitors more than a million web domains to assess the veracity of information – to Full Fact, an organisation building AI tools to help fact-check information. It is also an area that the UK Government is monitoring, most recently spending £75,000 on a social listening tool to "root out and rebut misinformation published online" and tap into conversations from all of the major social media sites and online forums.

While the Government’s attempt to encourage a more educated electorate is laudable, the problem comes into sharper focus when we understand what constitutes misinformation, and – more importantly – who decides. This is where things can go off the rails really quickly. In the 15th century, it was the Spanish Inquisition. Today, it’s a lot less clear. 

At best, AI tools can detect some of the most low-hanging disinformation fruit (the aforementioned deepfake videos, for example), but even this becomes hard when we consider the incredible speed with which news is now disseminated, shared, and repurposed. Significant portions of a population can now be swayed with effective communication, true or untrue, in a matter of days, sometimes even hours. 

There’s no denying that misinformation is a problem that needs to be addressed – but trying to solve it with AI could evolve into detecting ‘truths I don’t like’, and the resulting censorship could cause more damage, from stifling people’s freedom of expression to stalling scientific discovery.

Both Copernicus and Galileo are good examples of this. They challenged the status quo’s belief in an Earth-centred universe and were met with censorship, banned books and, in Galileo’s case, house arrest. In today’s milieu, their breakthroughs would surely be considered misinformation, and the silencing of their “fake news” would have prevented us from coming to a greater understanding of our place in the solar system. The irony of it, and the reason why AI will never be able to solve the problem of misinformation, is that truth, even in science, is often quite liquid, emergent, and dependent on dominant paradigms of understanding from a specific time and place. Indeed, the heliocentric view they promoted located the Sun at the centre of the Universe…and that’s not true either.

What we need is not more censorship, but more free speech. The problem today isn’t that misinformation exists or is even shared; the problem is that algorithms are being used to propagate and promote divisive or inflammatory information for profit.

Bad ideas are nothing new, but we have always been better off when those bad ideas could be seen, discussed, and debated in order to understand which ones to reject and which ones to foster. The difference is that in the past, we only had to worry about which ideas were worthy and which were merely tenacious. Today, we have to worry about the ease with which AI algorithms on social media and elsewhere can instantly boost the volume of bad ideas at scale.

Left unchecked, this can have a deleterious effect on our minds’ ability to establish reasonable prior knowledge about the world we live in. Without well-informed “priors”, we risk becoming more susceptible to manipulation through the algorithmic spreading of bad ideas, and we won’t be able to understand which ideas are truly worthy. 

We must be vigilant against this, but using AI for censorship is not the solution. The solution is a million minds of filtration, fact- and sanity-checking, discourse, dissent, persuasion, agreement, compassion, disagreement, hate, arguments, hope, and the love that you would get from all the people processing and sharing those ideas themselves.

Author

  • Brian Mullins

    Originally from California, Brian Mullins has been the CEO of Mind Foundry since June 2019. He is an entrepreneur and technical leader with over a decade of experience in high-growth technology companies. Brian has been at the forefront of emerging technology and studies its potential to positively change the lives of people around the world. He has been awarded over 100 U.S. patents, has been asked to testify as an expert before the U.S. Senate, received an Edison Award for Industrial Design, was named one of the CNBC Disruptor 50, and was named one of Goldman Sachs’ “100 Most Intriguing Entrepreneurs”.
