
The historical link between neuroscience and computing: its future potential in AI research

This article was co-authored by Ibrahim Mukherjee and Javed Khattak.

This article looks at the historical link between neuroscience and the development of computing and artificial intelligence. The focus is on particular brain-inspired mechanisms used in computing today and how they developed into their current form.

The son of a cobbler and laws of thought

On 24 November 1864, as George Boole walked three miles through pouring rain to give a lecture at Queen’s College, Cork, in Ireland, he developed the pneumonia that would tragically end his life within weeks. The son of a struggling cobbler, Boole was born in Lincoln, England. He had largely taught himself mathematics, and his interest in psychology led to a seminal book titled “An Investigation of the Laws of Thought” in 1854.

In this book and his previous work on the topic, Boole wanted to formalise the laws of discourse or argument as exemplified by Aristotelian Logic. He writes “In every discourse, whether of the mind conversing with its own thoughts, or of the individual in his intercourse with others, there is an assumed or expressed limit within which the subjects of its operation are confined”.

Shannon and circuits – the birth of information theory

Many years later, Boole’s work would be picked up by Claude Shannon, whose 1937 master’s thesis showed how this logic could be encoded in circuits and relays, laying the groundwork for digital circuit design and, through his later work on communication, for the birth of modern information theory. In this way of thinking, decisions are encoded as “yes/no” or “1/0” in electrical circuits, an approach used in designing circuits, computer motherboards, and programming languages. Larry Page, the co-founder and former CEO of Google, reportedly said that his method for solving complex problems was to reduce them to binaries, then simply choose the best option.
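
To make Shannon’s insight concrete, here is a minimal sketch in Python of Boolean operations behaving like switching circuits. The half-adder is the classic textbook construction, included purely as an illustration, not as anything from Shannon’s thesis.

```python
# Boolean "circuits" as functions: relays and switches compute Boolean algebra.

def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a
def XOR(a, b): return OR(AND(a, NOT(b)), AND(NOT(a), b))

def half_adder(a, b):
    """Add two bits: returns (sum, carry), exactly as a relay circuit would."""
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))
```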

All of this goes back to the curiosity of George Boole, and of his wife Mary, about psychology, how people think, and the formalities of public discourse.

Turing, the mind, and Turing machines

In 1936, Alan Turing wrote a paper called “On Computable Numbers, with an Application to the Entscheidungsproblem”. In this paper, Turing laid out a mathematical way to solve problems in the domain of computable numbers. He writes: “We may compare a man in the process of computing a real number to a machine which is only capable of a finite number of conditions q1, q2, …, qR”. Such machines are now called “Turing machines”, and they effectively describe Turing’s idea of what the mind goes through when it computes.
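
A Turing machine is just a finite table of states plus a tape. As a rough illustration (the machine and its transition rules below are invented for this sketch, not taken from Turing’s paper), here is a minimal simulator in Python that flips a string of bits:

```python
# A minimal Turing machine: a finite set of states plus a tape.

def run_turing_machine(tape, rules, state="start", blank="_"):
    """rules maps (state, symbol) -> (write, move, next_state)."""
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank
        write, move, state = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += 1 if move == "R" else -1
    return "".join(tape)

# Transition table: flip each bit, halt on reaching the blank.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("0110", rules))  # -> "1001_"
```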

This would become the basis of modern computer science. The point to note here, however, is that Turing was trying to mathematically model the human mind. His fascination with the mind culminated in the idea of a machine imitating a human so closely that we cannot tell it apart from a person – the “Turing test”.

Von Neumann and abstracting the logical organ

John von Neumann, nine years Turing’s senior, laid out the architecture for a computer by similarly mapping the parts of the brain onto the different parts of the machine. He specifies “an input “organ” (analogous to sensory neurons), a memory, an arithmetical and a logical “organ” (analogous to associative neurons), and an output “organ” (analogous to motor neurons).”

The architecture von Neumann thus developed is still used as the foundational structure of a computer and is known as the “von Neumann architecture”.
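
As a rough sketch of what that architecture means in practice, the toy Python machine below keeps instructions and data in one shared memory and runs the classic fetch–decode–execute loop. Its three-instruction “ISA” is invented purely for illustration.

```python
# A toy stored-program machine: instructions and data share one memory,
# and a fetch-decode-execute loop drives everything.

def run(memory):
    acc, pc = 0, 0                      # accumulator and program counter
    while True:
        op, arg = memory[pc]            # fetch
        pc += 1
        if op == "LOAD":                # decode + execute
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "HALT":
            return acc

# Program in cells 0-2, data in cells 3-4 -- one shared memory.
memory = {0: ("LOAD", 3), 1: ("ADD", 4), 2: ("HALT", None), 3: 2, 4: 40}
print(run(memory))  # -> 42
```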

A reverend and Bayesian babies

Thomas Bayes was a Presbyterian minister who, it is often said, in a quest to weigh the plausibility of miracles, came up with what is now famously known as Bayes’ formula. The formula describes a learning process in which any new view or hypothesis we hold is built from past experience plus new data coming into our mind, thus: initial belief + new data -> improved belief.
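
The formula itself is P(H|D) = P(D|H) · P(H) / P(D). A minimal sketch in Python, with made-up numbers chosen purely for illustration, shows the “improved belief” loop in action:

```python
# Bayesian updating: each new piece of data turns a prior belief
# into a posterior, which becomes the prior for the next round.

def bayes_update(prior, likelihood, false_alarm_rate):
    """P(H | D) = P(D | H) * P(H) / P(D)."""
    evidence = likelihood * prior + false_alarm_rate * (1 - prior)
    return likelihood * prior / evidence

belief = 0.01                       # initial belief: 1% chance H is true
for _ in range(3):                  # three pieces of supporting data
    belief = bayes_update(belief, likelihood=0.9, false_alarm_rate=0.1)
    print(round(belief, 3))         # -> 0.083, then 0.45, then 0.88
```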

As it turns out, Bayes’ formula captures remarkably well how humans form and update their beliefs.

Alison Gopnik and Laura Schulz have shown a remarkable similarity between how babies learn and Bayes’ formula. Josh Tenenbaum has created state-of-the-art AI models based on Bayes’ theorem, which is even used in state-of-the-art self-driving cars.

This is yet another place where modelling the human mind has been instrumental in the progress of AI. Some scientists have even suggested that the process of evolution itself mirrors a “Bayesian” process. While some may disagree, it is plausible that Darwin, having little mathematical training, juxtaposed the mechanics of his own mind with the natural world.

Al-Khwarizmi and algorithms of the mind

The word “algorithm” comes from the Latinised name of the Persian mathematician Muhammad ibn Musa al-Khwarizmi, a member of the House of Wisdom in Baghdad and a founding father of both algebra and algorithms.

Some scientists have suggested that our brains run a basic algorithm that enables intelligence; an equation has even been proposed for it. The idea is fairly plausible, since everything a computer does, from playing sounds to displaying pictures, has an algorithm behind it. Suffice it to say, there would be no modern computing without algorithms.
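
As an illustration of what “algorithm” means in this sense, here is one of the oldest known examples, Euclid’s method for the greatest common divisor, written in Python:

```python
# Euclid's algorithm: a finite, unambiguous recipe of the kind
# al-Khwarizmi's name came to stand for.

def gcd(a, b):
    while b:                # repeat until the remainder is zero
        a, b = b, a % b     # replace (a, b) with (b, a mod b)
    return a

print(gcd(48, 36))  # -> 12
```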

Neural networks

Neural networks are currently the state of the art in artificial intelligence and have been used to solve everything from the protein-folding problem to beating the best Go players in the world.

A neural network is an effort to simulate the neurons in a biological brain, and it has been wildly successful in many domains, such as computer vision, solving problems that would otherwise be impossible through traditional computing methods. Neural networks arrange “artificial neurons” in layers and use techniques like backpropagation to adjust the connections between them until the network produces the right result.
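
As a rough sketch of those two ideas, layers of artificial neurons and backpropagation, the small NumPy example below learns the XOR function. The layer sizes, learning rate, and iteration count are illustrative choices, not anyone’s canonical settings.

```python
import numpy as np

# A minimal two-layer network trained on XOR with plain backpropagation.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20_000):
    # Forward pass: each layer is a weighted sum squashed by a sigmoid.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass (backpropagation): push the output error back
    # through the layers via the chain rule.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates to weights and biases.
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0]
```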

The future of computing and AI

There are some very exciting developments in AI to look forward to in the next decade, with Boston Dynamics working on robots, progress in self-driving cars, and the further development of neural networks.

Companies like Wellness Assist, meanwhile, are building technology to help employees and knowledge workers optimise their productivity and wellbeing in the workplace, pioneering mindful computing. There are also efforts to make more “biological” computers.

Whatever the future holds, the brain will be central to developments in computing and AI in the next decade.

Author

  • Ibrahim Mukherjee

    Ibrahim Mukherjee is a seasoned technology leader with over 14 years of experience in developing and implementing innovative AI and business solutions. He holds a BSc in Management from the LSE and is pursuing a second bachelor’s degree in AI at the University of Applied Sciences Berlin after transferring from Computer Science at UoPeople. Ibrahim has worked for some of the top companies in the world, including BG Group, DSM and British Airways. He is working on his first book after signing a five-book publishing deal, and is additionally pursuing MSc and PhD opportunities as well as multiple start-ups. Contact him for job and speaking opportunities at www.ibrahim-cv.carrd.co, and for consulting contracts at TheGeneralAICo.
