Future of AI

The importance of improving public understanding of AI

In 1933, the Chicago World’s Fair gave visitors a glimpse into the future. Its exhibits showcased the latest innovations in rail, automobiles and architecture, alongside a model house equipped with microwaves, anthropomorphic robots and a small hangar for the family plane. For decades, people have wondered how technology will change the world around us, both positively and negatively. Now that many once-futuristic technologies are becoming a reality, how can we improve public understanding to ensure we benefit from developments in artificial intelligence? Here Charlie Rapple, founder of research communication specialist Kudos, explores.

Predictions about future technologies have ranged from the home of the future at the World’s Fair to apocalyptic films about artificial intelligence (AI) and robots taking over the world. The latter depictions are fictional, but as AI becomes more widespread, these differing portrayals of the technology mean that the general public are asking more questions about how it works, how it is being applied, and whether it can be trusted.

The age of AI

We are truly living in the age of AI, with a host of new technologies sweeping across almost every industry. Intelligent sensors are being used for applications ranging from monitoring patient symptoms in healthcare settings to predicting machinery lifespan in manufacturing facilities. AI has also begun to change everyday life, with the introduction of self-driving vehicles, voice-activated assistants and more.

The pandemic was a catalyst for demand for analytics and AI technology in both healthcare and business. A 2021 study by PwC found that 52 per cent of companies surveyed had accelerated their AI adoption plans because of COVID, and 86 per cent said AI was becoming a mainstream technology at their company. By using AI, businesses could react more quickly to shifts in working arrangements, purchasing behaviour, skills shortages and more. Similarly, in healthcare, an AI imaging database helped improve the diagnosis of patients presenting with COVID symptoms.

The AI debate

When a technology is first introduced to the market, there is often little information about it available to the general public. As a result, people may go looking for information themselves; if what they find is not credible, it can lead to misconceptions about the technology, which can then spread online.

This is particularly true for AI, about which opinions differ widely. Professor Stephen Hawking, for example, warned that AI could “spell the end of the human race”, while Google CEO Sundar Pichai said that AI will bring about a “more profound change than fire or electricity”. Both are renowned and respected figures, so whose view should the public trust?

How AI will affect the workforce is another common debate. In a Pew Research survey, 52 per cent of the experts polled expected AI and robots to create more jobs than they displace, while the remainder said they would displace significant numbers of blue- and white-collar workers. Many people also question the ethical implications of AI, particularly when it is used in decision-making applications such as accident prevention in driverless cars, court proceedings and smart assistants.

Trustworthy sources

One of the challenges we face is educating the general public about the true implications of AI, which can only be done using credible sources. Many of the questions people have about AI are already being answered by researchers in the field, but academic research is not straightforward for the general public to access. At best, we can advise that, when digesting any information or opinions about AI, readers check the provenance of the information and look for trustworthy sources, such as university researchers, technology journals, industry experts or educational books.

With so much debate in the industry, however, this advice only goes so far. The public will need to cross-reference any claims with a variety of sources to ensure they form opinions based only on credible information, discounting claims from unreliable sources that are made without researched evidence. It may even require them to refer to the academic literature to see where claims originated.

A challenge here is that academic publications are written in language that only specialist audiences can understand. This can lead to the public missing valuable information that could help them make more informed decisions and corroborate or disprove other sources of content. The difficulty of reading academic articles is an important reason that people turn to social media as a more digestible, but less credible, source of news.

Finding the facts

It’s time we bridged the gap between science and the public. Currently, people turn to unreliable sources because it is difficult to find information at the source, and even harder to understand the research once they have found it.

What we need is clear: an accessible platform dedicated to research on important topics in AI. It should include summaries of the latest research on everything from machine consciousness to AI in the factory of the future and how AI will affect our everyday lives. Improving access is not enough; readers must also be able to understand the research, its impact and what they should do with the information. Summarising each piece of research in a short, easy-to-understand explanation, for example, would ensure that anyone can quickly find answers to their questions.

As the use of AI becomes widespread, we must ensure that the public can find credible information on its implications, rather than making presumptions based on futuristic exhibitions or dystopian film portrayals. Giving the public access to credible and accurate information increases their understanding of AI’s capabilities and its potential for the future, encouraging them to embrace the technology rather than resent it.

Read more research about AI on our simplified research platform at https://www.growkudos.com/showcase/collections/artificial-intelligence.
