Conversational AI

Why AI chatbots fail when it comes to social norms

Artificial intelligence (AI) has taken over our lives. The technology shapes everything from how we access information to how we communicate and interact with people, often to the point where we don’t even realize we’re using it because it has become so commonplace.

One of AI’s most common applications is the chatbot. Essentially, a chatbot is a program within a website or app that uses machine learning and natural language processing to interpret inputs and understand the intent behind a request. Chatbots can be rule-based, handling simple use cases, or more advanced, capable of holding multiple conversations at the same time.
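
To make the rule-based end of that spectrum concrete, here is a minimal sketch in Python. The patterns and canned replies are invented for illustration; real systems use far richer rules or learned models.

import re

# A minimal rule-based chatbot: each rule maps a regex pattern to a
# canned reply. Patterns and responses are illustrative only.
RULES = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I), "Hello! How can I help you today?"),
    (re.compile(r"\bopening hours\b", re.I), "We're open 9am-5pm, Monday to Friday."),
    (re.compile(r"\brefund\b", re.I), "You can request a refund within 30 days of purchase."),
]

def reply(message: str) -> str:
    """Return the first matching canned response, or a fallback."""
    for pattern, response in RULES:
        if pattern.search(message):
            return response
    return "Sorry, I didn't understand that. Could you rephrase?"

print(reply("Hey, what are your opening hours?"))  # the greeting rule fires first

Even this toy shows the brittleness involved: the greeting rule fires before the more relevant opening-hours rule, because the program matches surface patterns rather than meaning.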

We’ve all heard about ChatGPT and its impressive capabilities. It can answer a multitude of questions and help with tasks such as writing emails, essays, and code. Hot on its heels, a host of other chatbots are now available or in development, and their capabilities continue to expand as the technology advances.

Conversational chatbots

In their article AI Chatbots Don’t Care About Your Social Norms, Jacob Browning and Yann LeCun provide a detailed analysis of the current state of conversational AI chatbots, their limitations, and what this means for human interaction. Their insights are vital in helping us better understand AI and its inherent limitations.

The article’s opening argument is that, despite all their advanced programming and technical capabilities, AI chatbots struggle to understand and conform to social norms and contexts.

Inappropriate or objectionable responses

The problem is that while AI chatbots are highly effective at producing statistically probable responses, they frequently fail to grasp the complexity of human language and the subtleties of conversation. Because they are trained simply to generate words based on a given input, they can’t truly comprehend the meaning behind those words. The result is shallow responses that lack depth or insight, and sometimes outputs that are inappropriate or even objectionable.
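
To see what “statistically probable” means in practice, consider a toy bigram model. It is vastly simpler than the large language models behind real chatbots, but it rests on the same principle: pick the next word based on how often it followed the previous one in training text, with no representation of meaning anywhere. The corpus below is invented for illustration.

import random
from collections import Counter, defaultdict

# Toy "training data": the model only ever learns word-follows-word counts.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Emit statistically probable continuations, one word at a time."""
    words = [start]
    for _ in range(length):
        counts = follows[words[-1]]
        if not counts:
            break
        # Sample the next word in proportion to how often it was observed;
        # meaning plays no part in this choice.
        nxt = random.choices(list(counts), weights=counts.values())[0]
        words.append(nxt)
    return " ".join(words)

print(generate("the"))

Real chatbots draw on enormously richer statistics, but the underlying move is the same: choose a plausible continuation, not a meant one, which is why fluent output can coexist with shallow comprehension.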

As Browning and LeCun correctly point out, these inherent flaws underline a key difference between human conversation and AI-generated responses, and they have led to an understandable mistrust of AI chatbots and their output among users.

The article also emphasizes the importance of social norms and conformity in human communication. People adapt to and follow a complex set of social guidelines and expectations. Following these shared conventions makes interactions simpler and makes those communicating more predictable to one another.

But in their current form, AI chatbots are unable to understand or replicate how these social norms are followed. Nor can they pick up on figurative turns of phrase such as idioms, metaphors, rhetorical questions, and sarcasm. Because they can’t comprehend the social and moral implications of their words, they may end up violating these norms or, worse, producing offensive responses.

Failure to learn

The problem with AI chatbots is not that they are “black boxes” or that their technology is unfamiliar, but that they have a history of being unreliable and offensive, and are unable to learn or improve in that respect. Given the sheer complexity and variability of human language and social norms, AI’s reactive approach will always be one step behind the required response, even as programmers work continuously to improve their systems.

Despite all this, Browning and LeCun argue that these glaring deficiencies shouldn’t negate AI chatbots’ capabilities. Yet while their ability to discuss a wide range of topics is impressive, it also reveals a shallow comprehension of human social life.

Because they can’t demonstrate empathy, honesty, responsibility, or self-awareness, chatbots cannot interact socially in a trustworthy manner. This limits their usefulness and may also increase their potential for harm.

Inability to understand and adhere to social norms

Browning and LeCun paint a clear picture of the limitations of AI chatbots where social norms and human interaction are concerned. While AI chatbots can mimic human conversation to an extent, the authors argue that their inability to understand and meet social expectations precludes them from providing a genuinely human conversational experience. The bottom line is that they aren’t intelligent entities capable of genuine human conversation.

The article’s sharp analysis highlights the need for a better understanding of AI chatbots’ capabilities and limitations. It also raises genuine concerns about the ethical implications of AI’s development, the future of AI and human interaction, and the role AI should play in our society, both now and in the future.

As we continue to innovate and integrate AI more deeply into our daily lives, this kind of close examination will be required to ensure that the technology improves human interactions instead of diminishing or distorting them.
