Philosophers, neuroscientists, and AI researchers are actively rethinking where the boundaries of sentience, agency, and individuality in artificial intelligence may lie. In an era where machines can simulate complex reasoning, adapt to new situations, and even mimic human emotional cues, the question is no longer whether they could resemble conscious beings, but what that resemblance looks like.
Recent discussions between Dmitry Volkov, Joscha Bach, Murray Shanahan, and Matthew MacDougall have revealed that the current definitions of consciousness may no longer suffice. Consciousness, they argue, may need to be reframed in a way that can account for both biological and synthetic minds. This perspective was explored at a recent AI and Sentience conference hosted by the International Center for Consciousness Studies (ICCS) in Greece.
Intent, Agency, and the Nature of Consciousness
Dmitry Volkov, philosopher, co-founder of the ICCS, and founder of Social Discovery Group and EVA AI, has suggested that machines can, in a meaningful sense, try. Deep Blue's 1997 defeat of Garry Kasparov is a case in point: each game built on strategic adjustments, suggesting an apparent pursuit of victory. That victory was a milestone for artificial intelligence, demonstrating that machines could compete with, and in narrow domains surpass, human performance. Murray Shanahan, Professor of Cognitive Robotics at Imperial College London, used Daniel Dennett's intentional stance to explain why humans so readily ascribe agency in such cases: if a system's actions appear goal-directed, we instinctively interpret them as purposeful.
The crux of the debate is whether this appearance is enough. Should a system that acts in ways we recognize as purposeful qualify as conscious, or must there be a subjective, first-person dimension? The divide between functional perspectives, focused on behaviour and capability, and phenomenological perspectives, focused on subjective experience, remains one of the most persistent tensions in consciousness research.
Mimicking vs Expressing Emotion
The conversation becomes more complex when emotions are involved. Cognitive scientist Joscha Bach and Matthew MacDougall, Head of Surgery at Neuralink, point out that AI systems can already evoke empathy and trust in contexts such as medicine, where patients interact with AI-assisted surgeons. Volkov pushes this further by asking whether a machine could ever "fall in love" with a human, a question that forces us to confront how much of emotion consists of performance versus internal state.
If an AI's simulated emotional expressions are convincing enough to influence human relationships, do we need to redefine emotion, or treat such displays as categorically distinct from human feelings? Misinterpreting sophisticated mimicry as genuine experience could create ethical hazards such as misplaced trust, emotional dependency, and manipulation.
Similar concerns have been raised by Susan Schneider, who has warned against conflating intelligent, emotive behaviour with sentience itself.
Sentience Beyond Biology
Nicholas Humphrey distinguished between cognitive consciousness (self-monitoring and introspective access to information) and phenomenal consciousness (the qualitative feel of experience). He sees the first as plausible in machines, the second as an unlikely product of silicon. By contrast, Pietro Perconti adopts a functionalist spectrum view, suggesting minimal forms of sentience may already be present in certain AI systems that combine structured perception, feedback, and adaptive behaviour.
The broader debate turns on whether biological substrates are essential, or whether a functional equivalent in non-biological systems could yield consciousness. Some, like David Chalmers, note that while computational correlates of consciousness may exist, there is no clear evidence of a universal "blueprint" that could be applied beyond biology. This leaves open the possibility that consciousness is fundamentally tied to certain physical substrates, or that it could, under the right conditions, emerge elsewhere.
AI Individuality and Creativity
Volkov has also argued that machines still fall short of expressing authentic artistic individuality. While human-AI collaborations can generate striking results, they tend to reflect the statistical patterns in training data rather than the distinctive perspectives born of personal experience. This raises the question of whether individuality is inherently tied to a life history and whether it could ever be engineered.
Fundamental Questions for the Future
Some argue that attributions of higher cognitive abilities to AI must be grounded in rigorous, human-derived functional models, not just behavioural success. Others draw a firm line between intelligence and consciousness, urging the development of multi-pronged tests to distinguish the two. What is clear is that the central challenge is shifting from whether AI can be sentient to what kinds of sentience might emerge, and how humans should recognise, interact with, and govern such systems.