
How to avoid falling foul of AI hallucinations in your legal practice

By Fraser Dear, Head of AI and Innovation, BCN

For the legal industry, AI offers incredible potential.   

With vendors such as Microsoft continuing to roll out an ever-expanding and improving suite of advanced AI services and solutions, legal practices can achieve new levels of process efficiency across everything from case management to document collation and compliance verification.

AI is your ‘copilot’, not a replacement colleague

Used in the right way, these tools can deliver major productivity gains. But while AI can be your ‘copilot’, easing administrative burdens and helping legal practices remain competitive, it is not there to replace the judgement and awareness of a legal professional.

Why? Well, despite all the advances in this technology space, AI isn’t able to replicate human thinking and creativity (yet). If you don’t understand the technology or use it correctly, you risk falling foul of issues such as hallucinations – and, at worst, contributing to a miscarriage of justice.

How do AI hallucinations occur in legal contexts?  

In simple terms, hallucinations can occur when AI is not implemented and used correctly. An AI hallucination is where AI presents false, misleading or fabricated information as fact. In a legal context, that is naturally incredibly problematic, and hallucinations can arise in several different ways:

  1. Inaccurate or irrelevant data
    If a legal practice allows an AI model to extract information from both the firm’s internal case management system and the wider web, it could end up using unreliable sources. Let’s say you ask it about a case involving Bob and Alice – if a blog has been published online about different individuals with the same names, then the AI might reference this rather than the case you intended. 
  2. Conflicting or outdated information
    Even if AI models have been told to only draw upon information in your case management system, issues can still arise. There might, for example, be multiple versions of key documents (e.g. v1, v2, v3, etc.). In such instances, the AI may not know which version to use, and draw information from outdated or incomplete files, again resulting in incorrect or incomplete outputs.
  3. Poorly framed prompts
    Equally, even when your AI is drawing on the right datasets, it can still provide wrong, vague or incomplete answers if your prompts and questions aren’t defined clearly enough. Let’s say you ask it how Bob and Alice were involved in a car accident – if there are multiple Bobs and Alices in the dataset, the AI may provide information about the wrong individuals. Unless you prompt your AI solution with explicit parameters (or this is done on your behalf), you can’t be sure whose story you’re getting – the sketch after this list illustrates the difference such parameters make.
  4. Fabrication
    Finally, without the right guardrails in your prompts, AI may completely fabricate information. You might ask it what Alice and Bob did after the car accident, only for it to tell you that they went for a walk in the park – even if they didn’t. 
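To make the point about prompt framing concrete, here is a minimal, purely illustrative sketch in Python. The case reference and the wording are invented for the example and aren’t tied to any particular AI product; the idea is simply to show how explicit parameters and guardrails narrow what the AI is allowed to say.

```python
# Minimal, illustrative sketch of prompt framing. The case reference and the
# wording are hypothetical; adapt them to whichever approved AI tool your
# practice actually uses.

CASE_REF = "CASE-2024-0187"  # unique reference number pinning the query to one matter

# A vague prompt: the AI has to guess which Bob, which Alice and which accident.
vague_prompt = "How were Bob and Alice involved in the car accident?"

# A grounded prompt: explicit parameters plus guardrails against fabrication.
grounded_prompt = (
    f"Using only documents filed under case reference {CASE_REF}, and the most "
    "recent version of each document, summarise how Bob and Alice were involved "
    "in the car accident. Cite the source document for every statement. If the "
    "information is not in the case file, reply 'Not found in case file' rather "
    "than guessing."
)

print(vague_prompt)
print(grounded_prompt)
```

The grounded prompt pins the query to a single matter, insists on citations, and gives the AI explicit permission to say it doesn’t know rather than invent an answer.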

In a legal context, any of these hallucinations can have severe consequences. It’s essential to mitigate the risks of them arising, or you could begin to see false or incorrect information creeping into cases that may compromise their integrity entirely. 

High-profile examples already emerging

Several high-profile examples of this have already begun to emerge. In an £89 million damages case against the Qatar National Bank earlier this year, 18 out of 45 case-law citations submitted by the claimants were found to be fictitious.

In response, the UK High Court has urged senior lawyers to address the misuse of AI in their practices. And it’s vital that they do so. Hallucinations can irreparably damage the reputations of legal firms, and even potentially result in a miscarriage of justice. Used responsibly and correctly, AI will help firms stay competitive, and working with an AI expert will help ensure this is the case.

Preventing AI hallucinations from arising in legal cases  

To heed these calls, and avoid falling foul of the potentially serious implications, there are several steps that legal practices can take.

First, it’s important to understand that AI is a tool for legal professionals. Microsoft Copilot has been named Copilot for a reason. It’s not there to replace people – it’s there to help them reduce administrative burdens.   

Legal practices need to ensure that this is understood and check that AI is being used in the right way. To achieve this, firms need to develop and implement AI usage policies, outlining which tools are appropriate for use and which ones are not.  

Monitoring in-firm usage is key 

According to Microsoft, 78% of AI users are bringing their own AI tools to work. For legal practices, that’s potentially a lot of case managers using unsanctioned tools that aren’t fit for purpose. For example, some publicly available AI tools may draw upon information outside your legal case management database, increasing the chance of hallucinations. Sharing data with such tools can also mean that external AI models are potentially being trained on your confidential or sensitive information.

Which tools are being used and how? 

To reduce this risk, firms need to ensure only approved tools are used, such as dedicated legal AI agents, or work with an IT solutions partner to develop proprietary tools for their practice built on their own validated data sources. These specialised tools are designed to uphold legal guardrails right out of the box, only drawing data from authorised sources and even narrowing analyses by using case-specific identifiers such as unique reference numbers.
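As a purely illustrative sketch of that ‘authorised sources only’ principle, the Python snippet below filters a hypothetical document store so that only the latest versions of documents belonging to one case reference are ever passed to an AI model. The data structure and field names are assumptions, not a real case management system.

```python
# Illustrative sketch of the 'authorised sources only' guardrail.
# CASE_DOCUMENTS and its field names are hypothetical stand-ins for a
# practice's real case management system.

CASE_DOCUMENTS = [
    {"case_ref": "CASE-2024-0187", "name": "witness_statement", "version": 2, "text": "..."},
    {"case_ref": "CASE-2024-0187", "name": "witness_statement", "version": 1, "text": "..."},
    {"case_ref": "CASE-2023-0042", "name": "witness_statement", "version": 1, "text": "..."},
]

def authorised_context(case_ref: str) -> list[dict]:
    """Return only the documents for the given case, keeping the latest version of each."""
    in_scope = [d for d in CASE_DOCUMENTS if d["case_ref"] == case_ref]
    latest = {}
    for doc in in_scope:
        name = doc["name"]
        if name not in latest or doc["version"] > latest[name]["version"]:
            latest[name] = doc
    return list(latest.values())

# Only this filtered context is ever passed to the AI model.
context = authorised_context("CASE-2024-0187")
print(f"{len(context)} document(s) in scope for the model")
```

Because the model only ever sees this filtered context, it cannot draw on other matters, the open web or outdated drafts – removing several of the hallucination triggers described earlier.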

With that said, reducing hallucinations isn’t just a case of ensuring employees are clear on which tools they can and can’t use. Equally, they must be clear on exactly how they should be using them. Here, education and training are critically important. Some legal practices are already leading the way in this domain, bringing AI leads on board to educate their workforces on AI, showing them how to mitigate hallucinations through effective prompt engineering. 

Regularly audit AI systems and always fact check outputs 

These efforts will go a long way in helping to mitigate the potential risks of AI in law. However, ensuring its effective usage is an ongoing process, and any system must be continually evaluated and audited.  

That should happen at the company level, but also at the user level. Indeed, the legal professionals using these systems should always fact check outputs themselves, ensuring references are valid and come from sources that can be trusted. If a source isn’t provided, ask where the information has been found. If the AI can’t tell you, question what you’re seeing, because it might be fiction.
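Fact checking can itself be partly systematised. The sketch below shows one hypothetical approach: it flags any sentence in an AI answer that does not cite an approved source, so a human reviewer knows exactly which claims still need verifying. The citation format and source names are invented for the example, not taken from any real tool.

```python
# Illustrative sketch of an output check: flag any sentence in an AI answer
# that does not cite an approved source. The "[doc: name]" citation format
# and the source names are assumptions, not a real tool's output.

import re

APPROVED_SOURCES = {"witness_statement_v2", "police_report_v1"}

def unverified_claims(answer: str) -> list[str]:
    """Return sentences that carry no recognisable citation to an approved source."""
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        cited = re.findall(r"\[doc:\s*([\w.-]+)\]", sentence)
        if not cited or any(c not in APPROVED_SOURCES for c in cited):
            flagged.append(sentence)
    return flagged

answer = (
    "Bob braked late and collided with Alice's car [doc: witness_statement_v2]. "
    "Afterwards they went for a walk in the park."
)
for claim in unverified_claims(answer):
    print("Verify before relying on this:", claim)
```

Checks like this don’t replace human review; they simply make it harder for an uncited claim to slip through unnoticed.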

Validating AI systems is crucial 

Checking and validating AI systems is crucial in legal settings where the consequences can be severe. Get it right, and AI can drive huge efficiency gains, accelerate case reviews, and enhance legal analysis and documentation. Get it wrong, however, and AI hallucinations stemming from invalid data or poor prompts could cause havoc. 

Protect your reputation by engaging experts 

All it takes is one hallucination – a single piece of false information getting mixed up in your legal case – to lead to major reputational damage or a potential miscarriage of justice. Those are grave consequences. So, to make sure it doesn’t happen, set clear policies around the use of your own proprietary AI and external AI, ensure people follow them, provide guidance on effective AI usage, and continually audit your systems. If you don’t have the AI expertise in-house to do this, work with AI experts to keep you on the right track.
