
With the UK Government recently inking deals with tech giants OpenAI, Anthropic and Google to ‘enhance public services’, and the publication of ‘Fit for the Future’, a 10-year plan to improve the health system, it’s clear that the Government is pinning its hopes on AI’s potential to transform the nation’s health, productivity and prosperity.
Harnessing AI to tackle the growing tide of poor mental health may seem like a no-brainer to policymakers. With ever-growing waiting lists for care and a dramatic increase in mental health-related economic inactivity, AI offers a triple win – relieving pressure on the NHS, removing barriers to work, and attracting dynamic, high-growth industries and investment to the UK.
Wes Streeting MP, the Health Secretary, is certainly an advocate. The example of a virtual therapist providing 24/7 support for mild or moderate mental health needs was trailed by Streeting as emblematic of the new vision of a digital-first healthcare system – but as AI moves from buzzword to bedside, what does this really mean?
AI is already reshaping the way we think about mental health care, but the health system is still catching up
The public are clearly well ahead of policymakers in their use of AI tools – in particular, in turning to them for support with their mental health.
When Generative AI and large language models (LLMs) began to emerge, the focus was on how these products could disrupt or improve work and education – supporting coding, administrative tasks, or productivity. Now, however, the top-ranked use case for the most popular LLMs is therapy and companionship.
As Chief Clinical Officer of a leading digital mental health service provider, I’ve seen first-hand how the growing ubiquity of AI tools has reshaped the way users interact with (human) practitioners. A growing number of users routinely question whether their practitioner is a chatbot – a conversation made trickier by evidence that chatbots will invent false qualifications and professional accreditations when asked for their credentials.
Despite the explosion in use of Generative AI amongst the public, it’s clear that adoption in the health service has been slower, with many products still deployed only in pilot phases.
In the UK, the Medicines and Healthcare products Regulatory Agency (MHRA) is responsible for ensuring the safety, effectiveness and quality of medical devices through regulation, whereby products placed on the market must comply with relevant standards.
There is currently no AI-specific approach to regulation. AI as a medical device is regulated within the Software as a Medical Device (SaMD) framework, which, as the MHRA points out, was not designed with AI’s unique characteristics in mind. It is therefore ill-equipped to address the distinct regulatory challenges associated with AI, including bias, differential performance across groups, and the continuous updating of algorithms – which may improve performance and safety, but risks evolving away from the algorithm for which approval was originally given.
Not only this, but the definition of a medical device – which determines whether a product is regulated by the MHRA – is increasingly tricky to pin down. Meeting the definition requires a product to have both ‘a medical purpose’ and ‘sufficient functionality’.
In recent MHRA guidance on digital mental health technologies, the definition of ‘a medical purpose’ includes technologies that assess risk, diagnose, predict, monitor, treat or prevent mental health conditions and/or symptoms.
In the examples provided by the MHRA, this can include functions such as providing information and enabling self-assessment, if the product is ‘intended for use’ by those with a formal diagnosis of a mental health condition, or those with symptoms at a level that could be diagnosable. Any use of AI within the technology would tick the ‘sufficient functionality’ box.
This guidance makes sense where AI tools are being developed with a specific use case in mind and marketed as a mental health solution – for example, a digital course of cognitive behavioural therapy intended for people with generalised anxiety disorder. It will help ensure that these products are rooted in clinical best practice, protect users from harm, and have appropriate safeguards in place to monitor risk.
But the focus on ‘intended use’ leaves more general AI products largely unregulated. This means there is limited protection from harmful advice, bias or clinical risk for people using systems like ChatGPT as a therapist. Developers and clinicians seeking to use AI responsibly in a mental health context, meanwhile, are hamstrung by a regulatory framework that is overwhelmed, slow and not fit for purpose.
And while swathes of the internet are now governed by the Online Safety Act, which aims to protect users from harmful content distributed via social media platforms, this does not apply to Generative AI. Few guardrails govern responses to user prompts beyond those set by the developers themselves – for instance, OpenAI’s recent intervention to reduce GPT-4o’s level of ‘sycophancy’, in response to concerns that the model appeared to reinforce users’ delusions and suicidal ideation.
Balancing risk and opportunity in mental health care
While the potential of AI is vast, the stakes in mental health are uniquely high. Accordingly, a deliberate approach is required – one that builds on ethical frameworks and best-practice guidance and focuses on how AI can enhance services for both users and staff. High standards of safety and responsibility must become the norm across all digital mental health tools and services.
While AI could transform mental health care for the better, there are significant risks – both in terms of the products themselves and the wider cultural shift towards an over-reliance on AI without rigorous ethical oversight.
Alongside studies and reports of AI chatbots failing to respond appropriately to suicidal ideation or severe psychological distress, other research indicates that heavier chatbot use is correlated with greater loneliness and with cognitive decline, as critical thinking skills are gradually eroded over time.
We must strike a careful balance between ambition and responsibility if we are to enhance, not endanger, mental wellbeing. So, how can we reap the benefits of AI while doing what we can to mitigate risks – whether those we know about, or those which will emerge as the technology evolves?
Building responsible frameworks for AI in mental health
It’s clear that the UK’s existing regulatory frameworks and expected standards are not fit for purpose, and fall short of what people might reasonably expect in terms of consumer, data and safety protections.
Getting this right could have huge benefits, cementing the UK as a leader in AI development and adoption while reaping the rewards of well-designed, effective tools that streamline public services and workflows and boost productivity.
As a starting point, there are a few principles that policymakers might adopt:
- Agile and iterative approaches to regulation and reimbursement that reflect the pace of technological change and allow for rapid course correction.
- Consistent regulatory frameworks that hold developers and suppliers to account for mitigating risks of harm and adhering to relevant content standards, so that people can expect the same protections across digital tools, regardless of their classification.
- Clear expectations that developers and suppliers will continuously monitor AI products for fairness across end-users, be transparent about their purpose, capabilities and limitations, and implement clear escalation protocols for high-risk interactions, alongside regular iteration and refinement.
In healthcare in particular, organisations and individual clinicians need strengthened guidance and toolkits to help them understand and mitigate the risks associated with AI – including bias, hallucination and model drift – and this learning must be embedded into mandatory and clinical training.
What’s next for AI in mental health
In the late 2000s, Apple coined the phrase ‘there’s an app for that’, and throughout the 2010s apps reshaped economies, labour markets and consumer behaviour – whether we’re hailing a taxi, ordering a takeaway, communicating with friends, or managing our health. The potential of health apps to transform care has long been heralded, with apps designed to improve outcomes in diabetes, obesity and other key conditions all recommended through the NHS – but they have yet to deliver.
To really reap the benefits of AI in mental health, we need to resist a similar ‘there’s an AI for that’ culture and shift towards an approach in which AI’s potential is harnessed deliberately to address specific challenges, allowing risks to be identified and managed – rather than being treated as a ‘magic fix’.
AI will undoubtedly be part of the future of mental health care. But the question is not whether we use AI, it is how. Without clear standards, transparent regulation, and a focus on solving real-world problems, enthusiasm risks outpacing evidence.
The future of responsible AI use in mental health will depend on collaboration. We need joined-up thinking across the public and private sectors to build systems that are safe, inclusive and effective. That means resisting the temptation to cut out human support, and instead using AI to strengthen and extend what trained professionals can offer.