
The need for responsible artificial intelligence (AI) solutions built with care at their core has never been more salient. New research from the AI Security Institute finds that a third of UK citizens have used AI for emotional support, companionship or social interaction, showing an increasing and collective reliance on this type of technology.
While AI should never be a replacement for real and meaningful human interaction, it can greatly enrich the lives of people when used correctly, offering transformative effects for some of the most vulnerable in society.
The evolution of dementia tech
Take the 55 million people currently living with dementia globally as an example. Decades of research into the science of dementia are beginning to bear fruit, most recently in the development of drugs found to slow the progression of early-stage Alzheimer’s disease. The path we’re on provides hope for the future, but we must not forget the urgent support needs of those living with dementia now.
Alongside symptom treatments and person-centred care, this is where intelligent assistive technologies are set to play a more prominent role in dementia care.
Thanks to the rapidly evolving technological landscape, we’re now venturing far beyond basic dementia tech like emergency alarms or neck pendants. While earlier technologies provided valuable support, too often they were designed to provide peace of mind to caregivers rather than to be used by the person living with dementia.
AI and machine learning are moving the dial to better serve those living with dementia directly, helping people to live independently in their own homes for longer and enriching their lives, as well as providing the reassurance that families and caregivers seek.
Emerging AI solutions
CrossSense is one example of an assistive AI technology being developed by a co-operative in London. The AI companion runs on off-the-shelf smart glasses to guide someone through their daily routine. That could be helping them to safely make a cup of tea, reminding them where they have left their keys, or suggesting whether or not to water the plants.
It can pick up on elements of conversations and share prompts at later times – for example, if the user mentions that they might need a haircut, a day or so later CrossSense will check in and remind them that they wanted to make an appointment at the hairdresser.
Quite different, but equally impressive, is SenS2, being developed by Cambridge-based Supersense Technologies. It is a discreet device that looks like a Wi-Fi unit. Sitting quietly in a corner, it uses radar and AI to learn household routines by recognising patterns of movement – not people or activities. It doesn’t monitor individual behaviour; it simply learns and understands what “normal” routine looks like.
When routines are as expected, family and caregivers receive reassuring WhatsApp messages such as “Mum’s awake and the heating is on.” If something changes, they’re alerted – for example, “Mum was up six times in the night.”
The result is independence with dignity. People living with dementia can stay at home without feeling watched, while families have confidence they’ll know when support is needed – without being intrusive or overbearing.
Both are finalists in the Longitude Prize on Dementia, a global prize rewarding the development of assistive technologies for and with people living with dementia, alongside teams from Austin TX, Sydney AU and Porto PT. The £1 million grand prize is set to be awarded in London in March 2026.
Responsible innovation remains key
Yet AI is not without its flaws. Bias, for example, is a well-documented problem. As a machine, AI has no moral compass; it can be sycophantic, reinforcing a person’s point of view even when that may put them at risk of harm.
AI is a nascent technology, and creating solutions for people living with dementia – or any vulnerable group – allows no margin for error. Flaws must be mitigated to ensure that technologies are safe, reliable and genuinely contribute to care. But what does responsible innovation look like in practice?
AUTONOMOUS, developed by Porto-based Associação Fraunhofer Portugal Research, uses large visual models to analyse footage from cameras and microphones positioned in homes. This means the system can recognise when a fridge is left open or a tap is running and send a friendly reminder to a smartwatch worn by the homeowner to take action.
However, filming and recording a person in their own home is potentially intrusive and presents privacy and data security risks. To meet this challenge, the team has developed its system to run on a small computer in the home, rather than in the cloud, so that personal data never leaves the house.
Making it a moral duty
It’s difficult not to be impressed by the AI technologies being developed to help people living with dementia remain independent in their homes for longer – with many coming to market in the next 12 months.
There is a moral duty to build responsible AI products that put the emotional and physical wellbeing of users above all else, while respecting privacy and dignity. As this technology begins to play an increasingly prominent role in all our lives, for the most vulnerable people in our society who could benefit greatly from assistive technologies, “responsible AI” takes on a far deeper meaning.