Revolution or Evolution? Transformer AI is solving the healthcare puzzle

By Perran Pengelly, CTO of DrDoctor

Public sector productivity in the UK is falling. A big driver of the decline is our beloved NHS – it remains 18.5% less productive than before the COVID lockdown, and the strain of winter pressures worsens this every year.

Chancellor Rachel Reeves has said that one of the government’s main aims will be to cut down on wasteful spending in public services, but it’s hard to imagine where these cost savings will come from – particularly when trusts are investing in more clinics, clinicians and resources to offer more appointments.

In this context, emerging technologies like AI have the potential to offer valuable solutions. AI has proven benefits beyond exciting med-tech advancements. It can be deployed for operational efficiency, improving the back-end of NHS infrastructure and helping trusts to meet the supply-demand challenge. For example, AI can help hospitals predict which patients are most likely to miss an appointment and personalise appointment reminders. In turn, this reduces the no-show rate by up to 30%, allowing clinical and administrative time to go back into seeing more patients.

But under heightened pressure to reduce waste, how do you identify which tools are the most effective?

The key is choosing the technology that is evolving with the industry. The transformer architecture, a neural network design first published in 2017, powers today’s popular Large Language Models (LLMs). These models are not only capable of understanding and generating textual and visual content but can also be trained to tackle a broader range of challenges, making them versatile tools for addressing complex problems in healthcare and beyond.

When we combine AI agents based on new models, each optimised for its own problem area, we get a suite of helpers that increase productivity – clinicians and nurses will be able to care for a greater number of patients, with better outcomes.

Developing trust in AI

Let’s start by understanding AI in its broader context. Across industries, AI models have traditionally been narrow in their capabilities. Older models excel at tasks like predicting sequences, detecting anomalies or categorising data. Because each is created for a specific task, statistical analysis can measure its effectiveness – how often it predicts correctly.

LLMs, built on the transformer architecture, work by predicting the next token of their output. Each prediction is probably correct, but not always. Proving an LLM’s correctness with statistical methods is hard, because the answer depends on the prompt – but in this lies their great power: the ability to handle a far greater variety of tasks.
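As a minimal sketch of what “predicting the next token” means – with toy, made-up probabilities rather than any real model’s output – the model assigns a probability to each candidate token and the decoder then picks one:

```python
import random

# Toy next-token distribution for a prompt like "The patient was" –
# the tokens and probabilities are illustrative, not from a real model.
next_token_probs = {
    "discharged": 0.40,
    "admitted": 0.30,
    "referred": 0.20,
    "reviewed": 0.10,
}

def sample_next_token(probs, rng=random.random):
    """Pick a token in proportion to its probability (sampling decoding)."""
    r = rng()
    cumulative = 0.0
    for token, p in probs.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # fall back to the last token on floating-point rounding

# Greedy decoding instead always takes the single most likely token:
greedy = max(next_token_probs, key=next_token_probs.get)  # "discharged"
```

The point of the sketch is that the model is “probably correct”: the most likely token wins most of the time, but sampling can legitimately return any of the candidates.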

To measure an LLM’s ability, it is often asked a set of standard questions, for example drawn from common exams. This allows us to compare how different LLMs score on the same test. For a domain-specific model deployed in production, you may supplement this with an automated test suite that checks its answers are correct for known inputs. Combining different sets of tests builds confidence that the model’s answers are relevant, coherent and incorporate the provided context.
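Such a test suite can be very simple in shape. A hedged sketch, where the model is any callable from prompt to answer and the cases are hypothetical examples rather than a real clinical test set:

```python
def exact_match_score(model, test_cases):
    """Fraction of prompts where the model's answer matches the expected one.

    `model` is any callable prompt -> answer; a production suite would
    add fuzzier checks (relevance, coherence, use of provided context).
    """
    passed = sum(
        1 for prompt, expected in test_cases
        if model(prompt).strip().lower() == expected.lower()
    )
    return passed / len(test_cases)

# A stub standing in for a deployed domain-specific model (illustrative only).
def stub_model(prompt):
    answers = {
        "Which clinic handles fractures?": "orthopaedics",
        "Which clinic handles skin conditions?": "dermatology",
    }
    return answers.get(prompt, "unknown")

cases = [
    ("Which clinic handles fractures?", "orthopaedics"),
    ("Which clinic handles skin conditions?", "dermatology"),
    ("Which clinic handles hearing loss?", "audiology"),
]
score = exact_match_score(stub_model, cases)  # 2 of 3 correct
```

Exact-match scoring is the crudest possible check; its value is that it runs automatically on every change to the model or prompt.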

The error rate of AI models is judged far more harshly than that of humans. For a business to trust AI, the model may need to be 10x, 100x or even 1,000x safer. The tolerance for risk is narrower still in healthcare. A trained clinician or nurse will know to challenge an AI model’s generated analysis of diagnostic images in a way other business decision-makers wouldn’t. They won’t simply accept the AI-driven answer, but it will certainly help them arrive at the right one more quickly.

Starting with one problem at a time

The best place to start when introducing a new technology to improve efficiency is with one distinct problem – in healthcare, the communication problem. LLMs can quickly summarise, categorise, transcribe and translate, and in some contexts draw insights that help people make quick, data-driven decisions.

Rolling out AI in a way that starts by making people’s lives easier – reducing strain on booking teams and improving the patient experience – is a solid first step in introducing the technology.

Analysing medical records

Your AI then needs to go a step further. Transformer LLMs are trained on large volumes of data from the internet, which forms the foundation of their analysis. Previously, they had a limited ‘context window’ – the amount of input the model can consider at once. That is changing quickly: it is now possible to take in a patient’s full set of notes, which is critical for healthcare productivity.

To go even deeper, transformers include a component called an ‘attention mechanism’ that enables the system to understand relationships between inputs, such as a height expressed in feet versus centimetres, or interactions between different medications.
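The feet-to-centimetres relationship, at least, is just arithmetic, and it is worth seeing why the two figures in the next paragraph match:

```python
def feet_inches_to_cm(feet, inches):
    """Convert an imperial height to centimetres (1 inch = 2.54 cm)."""
    return (feet * 12 + inches) * 2.54

height_cm = feet_inches_to_cm(6, 1)  # 6'1" is 73 inches, about 185 cm
```

An LLM has no such formula built in; the attention mechanism lets it learn from data that “6'1"” and “185 cm” tend to describe the same quantity.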

For example, they can understand that 6’1” and 185 cm describe the same height. The same mechanism can help LLMs understand interactions between drugs. With greater digitisation of medical records, you can begin applying automated rules to each patient’s particulars, such as medical history and allergies. These rules are coded in the existing Electronic Health Record, but the system often also needs to lean on the free-hand notes in the patient’s file. High-quality AI models can pull from these, flagging things that may have been overlooked and analysing notes that would otherwise go undiscussed.
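The coded rules themselves are deterministic and sit alongside the model. A hypothetical sketch – the field names and the single interaction listed are illustrative, not a real formulary or any real EHR’s schema:

```python
# Illustrative interaction list; a real system would use a clinical knowledge base.
KNOWN_INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
}

def check_prescription(record, new_drug):
    """Return warnings for a proposed drug, given a structured patient record."""
    warnings = []
    if new_drug in record.get("allergies", []):
        warnings.append(f"patient is allergic to {new_drug}")
    for current in record.get("medications", []):
        note = KNOWN_INTERACTIONS.get(frozenset({current, new_drug}))
        if note:
            warnings.append(f"{current} + {new_drug}: {note}")
    return warnings

record = {"allergies": ["penicillin"], "medications": ["warfarin"]}
```

Rules like these only see structured fields; the complementary role of the LLM is to surface what is buried in the free-hand notes that the rules cannot reach.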

Setting objectives for the technology

To better understand LLMs, it’s important to note that they can grasp intent and context, and therefore generate helpful responses. We’re now training LLMs on reasoning tasks, in which the model breaks its goal down into a series of subtasks – a ‘chain of thought’.

As it acts on each subtask, an LLM can update its chain of thought based on what it observes. This matters when the LLM is given access to APIs, for example to book appointments or message patients. These ‘skills’ can in turn be used to further train the models, setting them up for success.
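The loop of acting on a subtask and reading back an observation can be sketched in a few lines. The tool names here (`book_appointment`, `message_patient`) are hypothetical stand-ins, not DrDoctor’s or any real system’s API:

```python
# Stub tools standing in for real booking and messaging APIs.
def book_appointment(patient_id, slot):
    return f"booked {patient_id} into {slot}"

def message_patient(patient_id, text):
    return f"sent to {patient_id}: {text}"

TOOLS = {"book_appointment": book_appointment, "message_patient": message_patient}

def run_plan(plan):
    """Execute a chain of thought as (tool, kwargs) steps, recording observations."""
    observations = []
    for tool_name, kwargs in plan:
        result = TOOLS[tool_name](**kwargs)
        observations.append(result)  # the model reads this before its next step
    return observations

plan = [
    ("book_appointment", {"patient_id": "P123", "slot": "Tue 09:30"}),
    ("message_patient", {"patient_id": "P123", "text": "Your appointment is Tue 09:30."}),
]
log = run_plan(plan)
```

In a real agent the plan is not fixed up front: the model would regenerate the remaining steps after each observation, which is exactly what makes the chain of thought updatable.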

The hope for the future is that these models can coordinate and arrange new patient referral bookings, keeping a patient informed, while simultaneously managing results back from diagnostics.

Implementation of LLMs

When looking to apply these advanced AI models to businesses – in this case, hospitals – the key is creating a culture of trust and transparency around the technology. This means working with both NHS staff and patients, understanding individual needs, and allowing patients to opt out. Once trust in these tools is established, they can play a significant role in streamlining operations and improving productivity in the NHS.
