
Artificial Intelligence vs. Natural Stupidity

The year was 1971, when artificial intelligence (AI) was still in its early stages. The big IBM 360 mainframe at the Technion, Israel's Institute of Technology, was down for 3 days: its internal memory was being upgraded from 1MB to 2MB!

I was 16 years old, in my freshman year in the Department of Computer Science. To hide my very young appearance, I grew my hair down to my shoulders and smoked a tobacco pipe during classes (it was allowed!). We were all fascinated with Eliza, Joseph Weizenbaum's prototype of a conversing computer program, which was just getting worldwide attention.

One of the very first courses that freshman year was "Finite Automata". It was one of my favorite courses (probably because it didn't require calculus). I was particularly fond of Turing Machines.

I was amazed by the simplicity of the model on one hand, and by the fact that it can simulate any algorithm on the other. It also drew my attention to Alan Turing himself, the man behind the machine, and more importantly behind the Turing TEST.
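That simplicity is easy to show in code: a Turing machine is just a finite transition table, an unbounded tape, and a read/write head. Here is a minimal sketch of a simulator, with an illustrative transition table of my own (not an example from the course) that increments a binary number by one:

```python
# A minimal Turing machine simulator: a finite transition table,
# an unbounded tape, and a head. Moves are R (right), L (left), N (stay).

def run_tm(tape_str, transitions, start, accept, blank="_"):
    tape = dict(enumerate(tape_str))  # sparse tape, unbounded in both directions
    head, state = 0, start
    while state != accept:
        symbol = tape.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += {"R": 1, "L": -1, "N": 0}[move]
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Transition table for binary increment:
#   'right' — scan to the end of the input
#   'carry' — flip trailing 1s to 0 until a 0 (or blank) becomes 1
INCREMENT = {
    ("right", "0"): ("0", "R", "right"),
    ("right", "1"): ("1", "R", "right"),
    ("right", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "N", "done"),
    ("carry", "_"): ("1", "N", "done"),
}

print(run_tm("1011", INCREMENT, start="right", accept="done"))  # 1100
```

Six table entries suffice for this machine, yet the same simulator loop, given a large enough table, can run any algorithm at all.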

Alan Turing had taken his own life in tragic circumstances 17 years earlier. Not long before his death, he had published a landmark article, "Computing Machinery and Intelligence", which opens with the question "Can machines think?".

I read that article 49 years ago, and I am under its influence to this day.

In a nutshell, it addressed the question of "Intelligent Machines" that could "really think". In this article, written 4 years before his death, Turing identified rationality, "the ability to think", or "intelligence", with the mere, simple capacity to hold a human-like conversation.

He devised a thought experiment he called "The Imitation Game", involving two speakers (of a natural language), one trying to fool the other regarding some key aspect, such as their gender.

The same kind of game can be played between a human and a computer program, with the program "trying to fool" the human into believing they are conversing with a fellow human. This scenario is the famous Turing Test, a test many chatbot developers have tried to pass over the years, with varying degrees of failure.

From then on, the term “Artificial Intelligence” stayed with me as the holy grail of computer science. A computer that can behave like a person! As an avid Science Fiction fan, I knew very well that “behaving like a person” does not necessarily require a body.

When we engage with another human and assess their behavior, it is typically the linguistic behavior that counts. "Artificial Intelligence", then, is a software program (even one without any physical features) that can converse just like a human.

Years went by.

I spent the next 3 decades at the heart of the technological revolution. I developed applications, enterprise systems, and software development tools, and throughout all this time the term "AI" rarely came up.

It was often used to describe technological innovations that were still in R&D and didn't quite work YET.

We used to joke and say “If it works it’s not AI”… In some cases, people tried to soften the definition and make the challenge more realistic.

The soft definition was “Programs that can learn”. In most cases, that meant learning from the user who uses them.

I remember the first time someone tried to sell this new definition.

My response to that was swift: "So that means Microsoft Clippy, which learns the user's preferences in using WORD, qualifies as 'Artificial Intelligence'?"

Soon things got worse. Towards the turn of the Millennium, and with even greater force after the hi-tech bubble burst in 2000, the term AI started to be seriously abused. I will not bore you with the chronological order in which it all happened, but before you knew it, the clear, simple, understandable term "Artificial Intelligence" was being used to describe the following:

Machine Learning

Expert Systems

Data Mining

Deep Learning

Neural Networks

Heuristic Search

Image Recognition

Speech Recognition

Big Data

NLP/NLU/NLG

Language Translation

Machine Vision

Fuzzy Logic Systems

Cognitive Computing

In fact, the current use (or, rather, abuse) of the term "AI" has stripped it of any meaning. All of the above, and much more, can be categorized as "AI", so the term hardly means anything anymore.

Meanwhile, people continued trying to build software that could, potentially, take a shot at the good old Turing Test. These attempts were usually made by hobbyists and enthusiasts, who competed for a $1,000 prize at the Loebner Prize Turing Test competition. These abysmal attempts at AI were given a cheap, demeaning name: Chatterbots, or Chatbots. Nothing to do with AI…

Another 2 decades went by (well, almost). Over the past two years, I have been happy to follow the emerging new buzzword: Conversational AI.

NOW you’re talking!

The domain of "software systems that imitate human-like linguistic behavior" finally has a proper label, almost as good as the original "AI". The prefix "Conversational" clarifies the type of intelligence (which, in my view and Turing's, is the only kind that really counts).

The task at hand is not to build a system that knows much. I had the pleasure of being acquainted with a dude who referred to himself as "Joe Sixpack" and was proud of the fact that he was thrown out of school in 3rd grade.

This guy was clueless about almost every conceivable topic, but there was absolutely no question that he was a rational, English-speaking adult, although he knew practically nothing about anything.

This Joe became my perfect example of what a good Chatbot need NOT possess: much knowledge; information; manners; not even acceptable values. Just the ability to hold the thread of the dialog, and to understand only what it takes to produce a response that indicates such understanding.

The real holy grail of (Conversational) AI is this: to effectively mimic Natural Stupidity, as it manifests itself in humans like Joe Sixpack. Once we get there, giving our bot a college education would be a piece of cake.
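That conversational minimum, holding the thread with no knowledge base at all, is essentially what Weizenbaum's Eliza did. A minimal sketch in that spirit (the patterns and pronoun reflections below are my own illustrative examples, not Weizenbaum's originals):

```python
import re

# An Eliza-style responder: no facts, no knowledge — just keyword patterns
# that reflect the user's own words back, which is often enough to produce
# a response that indicates "understanding".

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"(.*)\?$"), "What do you think?"),
    (re.compile(r"(.*)"), "Tell me more about that."),  # catch-all fallback
]

def reflect(fragment):
    # Swap pronouns so "my job" comes back as "your job"
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(line):
    for pattern, template in RULES:
        m = pattern.match(line.strip())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))

print(respond("I need a vacation"))     # Why do you need a vacation?
print(respond("I am tired of my job"))  # How long have you been tired of your job?
```

A few dozen such rules carry a surprisingly long dialog, precisely because, like Joe, the program needs to understand only what it takes to keep the thread going.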

Try CoCo, our AI-powered chatbot here!
