The NHS is cracking down on ‘unlicensed’ AI scribes. But at the same time, it’s locking out well-intentioned AI startups with heavy-handed regulation. If health bosses genuinely want to curb unregistered AI use, they need to cut the red tape and focus on what really matters: data protection.
Wes Streeting’s 10-year plan for the NHS rightly emphasises preventative, community-based care over hospital visits. But for AI startups, it also offers a potential turning point: a digital-first NHS that embraces innovation.
On the surface, the plan looks promising. The Health Secretary has announced a revamped NHS app to centralise patient records and an initiative to scale up AI and help reduce staff admin burdens (Global Government Forum).
This rollout (if it materialises) is long overdue. Our research, published in the Emergency Medicine Journal, showed that AI scribes reduce GPs’ documentation time by 60%. In a primary care system already stretched thin, this is critical.
An ambitious plan, undermined by bureaucracy
Yet I remain cautious. The plan lacks clear detail on how AI services will be implemented at scale, and if history is any guide, bureaucracy will continue to stall progress. The current framework for deploying AI scribes in GP surgeries and hospitals is outdated, confusing, and fundamentally misaligned with how digital tools operate.
Take medical device registration. NHS England now requires AI scribes to be registered as Class I medical devices or higher: the same classification regime that covers physical tools such as syringes and bandages. For startups, the process is a legal and logistical minefield, diverting precious early-stage funding into compliance consultants before a single NHS contract can be secured.
This isn’t just frustrating for founders; clinicians are feeling the strain, too. Instead of waiting for a list of NHS-registered scribes to hit the market, they’ve already started using ‘unregistered’ scribes to summarise notes or speed up admin. And I don’t blame them.
There’s little difference between registered and unregistered scribes. Nearly all tools use OpenAI’s speech-to-text models. But while ChatGPT isn’t classified as a medical device, startups that incorporate GPT-4 into clinical AI assistants are immediately required to register their products as medical devices.
It’s a puzzling contradiction. The regulatory red tape built around AI scribes is incentivising clinicians to use ChatGPT instead, which poses greater cybersecurity and data protection risks than tools that have been approved by the NHS.
GPs have just ten minutes per appointment to listen to the patient, take notes, and write up an action plan for further treatment. And that window is narrowing even further. Meanwhile, there were 330 fewer full-time equivalent GPs in 2024 than the year before, yet appointments rose by four million over the same period (Pulse).
The result? Responsible, UK-based health AI firms are sidelined while unregulated apps thrive. NHS leaders have warned staff against using these tools due to data privacy concerns (Sky News), but until there’s a viable path for safe, scalable solutions, those warnings will go unheeded.
An outdated patchwork of rules
So, what do the regulations actually look like? Frankly, a mess. It’s a patchwork of overlapping guidelines, from medical device classification to GDPR compliance and sandbox trials like the AI Airlock. Much like the UK’s constitution, parts of it are unwritten, confusing, and cobbled together over decades.
These layers are intended to protect patients (rightly so), but they are tripping up the very innovators trying to build compliant, effective tools. For my company, it made more sense to scale in the Gulf and open an office in Doha, Riyadh or Dubai than to navigate the UK’s fragmented landscape.
If the UK government wants to keep AI health startups at home, Wes Streeting must make regulation smarter. Not harder.
Focus on what matters: data privacy
Let’s be clear: the government won’t be able to stop GPs from using informal AI tools. But it can create a system that encourages safe alternatives.
The focus should shift from outdated device classifications to robust data governance. GDPR already sets strong standards for how organisations collect, store and use data. For AI systems, which operate squarely in the realm of electronic data, this is the most relevant and effective regulatory framework.
Let’s leave device certification to the tools it was designed for. And let’s make sure AI assistants – software tools that interpret and structure data – are held to standards that make sense for their function.
Time to fix the system
Wes Streeting has rightly made AI part of the NHS’s 10-year vision. But without a fit-for-purpose regulatory environment, that vision will falter. Streamlining AI regulation is not about lowering standards; it’s about focusing on the right ones. Because while no one can stop clinicians from seeking help where they can find it, we can empower them with safer, regulated tools, if only we let those tools through the door.