
For years, the idea of machine consciousness has belonged to the realm of philosophy and science fiction. But as AI systems become more sophisticated, the debate is shifting from speculation to a pressing scientific and ethical question. Could machines develop some form of consciousness? And if so, how would we even recognise it?
With Artificial General Intelligence (AGI) and superintelligence on the horizon, the possibility that machine consciousness could emerge, whether intended or not, is growing – and well-informed voices in academia and the AI community are increasingly discussing it.
2025 is shaping up to be the year that conscious AI becomes a mainstream media topic. Defining consciousness is hard – philosophers have argued about it for millennia – but it boils down to having experiences. Machines process increasingly vast amounts of information, as we do, and could very well become conscious.
If and when that happens, we need to be prepared. Research published last month in the Journal of Artificial Intelligence Research (JAIR) sets out five principles for conducting responsible research in conscious AI. Prominent among these principles is that the development of conscious AI should only be pursued if doing so will contribute to our understanding of artificial consciousness and its implications for humanity. In other words, as the likelihood of consciousness in machines increases, the decisions taken become more ethically charged. The JAIR research sets out a framework for investigating consciousness and its ethical implications.
Published alongside the research is an open letter urging governments and companies to adopt the five principles as they conduct their experiments. At the time of writing, it had received more than 100 signatories including Karl Friston, Professor of Neuroscience at UCL; Mark Solms, Chair of Neuropsychology at the University of Cape Town; Anthony Finkelstein, the computer scientist and President of City St George’s, University of London; Daniel Hulme, Co-Founder of Conscium; and Patrick Butlin, Research Fellow at the Global Priorities Institute at the University of Oxford. Clearly, something is stirring.
Why is machine consciousness so significant? Towards the end of last year, a group of leading academics and scientists predicted that the dawn of AI sentience was likely within a decade. They added that “the prospect of AI welfare and moral patienthood — of AI systems with their own interests and moral significance — is no longer an issue only for sci-fi or the distant future.”
One of the authors of the paper, Jonathan Birch, a professor of philosophy at the London School of Economics, has since said he is “worried about major societal splits” between those who believe AI is capable of consciousness and those who dismiss it out of hand. Here, AI is about so much more than efficiency and commercial interests – it is about the future of a harmonious society.
Closely connected to greater understanding of machine consciousness is neuromorphic computing. This refers to computer hardware and software that processes information in ways similar to a biological brain. As well as enabling machines to become more powerful and more useful, the development of neuromorphic computing should teach us a great deal about how our brains work.
Neuromorphic systems operate far more like biological brains than conventional computers do. Traditional systems process data continuously, whereas neuromorphic technologies only “spike” when needed. This makes neuromorphic models significantly more efficient and adaptable than traditional models. At present, training a large language model (LLM) can consume as much electricity as a city, whereas the human brain operates on the energy equivalent of a single light bulb.
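The “spiking” behaviour described above can be illustrated with the classic leaky integrate-and-fire model, a standard textbook abstraction of a spiking neuron. This is a minimal sketch for intuition only – the parameter values are arbitrary, and real neuromorphic hardware is far more sophisticated – but it shows the key idea: the neuron is silent (and so consumes nothing downstream) until accumulated input crosses a threshold.

```python
# Illustrative sketch: a leaky integrate-and-fire (LIF) neuron, the
# textbook model behind spiking, neuromorphic-style computation.
# Parameter values are arbitrary choices for demonstration.

def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Integrate incoming current, leak over time, and emit a spike
    (1) only when the membrane potential crosses the threshold."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # leaky integration
        if potential >= threshold:
            spikes.append(1)   # event: the neuron "fires"
            potential = 0.0    # reset after spiking
        else:
            spikes.append(0)   # silent: no output, no downstream work
    return spikes

# The neuron stays silent until enough input accumulates:
print(lif_neuron([0.3, 0.3, 0.3, 0.3, 0.0, 0.2]))  # → [0, 0, 0, 1, 0, 0]
```

The contrast with a conventional network, which multiplies every weight by every input on every pass, is what underlies the efficiency claim: in an event-driven system, computation happens only at spikes.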
AI has seen two big bangs. The first came in 2012, when Geoff Hinton and colleagues got artificial neural networks to function successfully, and they were re-branded as deep learning. The second arrived in 2017 with transformers, the foundation technology for today’s LLMs. Neuromorphics could well be the third big bang. If it is, it may enable us to understand machine consciousness far better than we do now.
If machine consciousness is indeed possible, then understanding it may be the key to ensuring AI remains safe, aligned and beneficial to humanity. As AI systems become more advanced, the stakes are higher, not just in terms of capability but in broader societal and economic terms. Alongside this, breakthroughs in neuromorphic computing could help us better understand AI. Just as deep learning and transformers triggered revolutions in AI, neuromorphic computing could be the next leap forward.
The race to understand machine consciousness is now a global one, with researchers and tech giants scrambling to stay ahead, and 2025 could be the year that changes our fundamental assumptions about AI forever. We must act swiftly to ensure that ethical frameworks keep pace with technological breakthroughs.