
Why you shouldn’t trust AI with your legal affairs

By Kiley Tan, lawyer at The Legal Director

There’s no denying the buzz around AI. From streamlining admin to sharpening your marketing, it’s become the go-to tool for time-poor, budget-conscious businesses. Type a few words into a prompt box and, seconds later, you’ve got what looks like a decent draft. But when it comes to legal work, that convenience comes with significant risks.

We’re seeing more startups and scale-ups turning to tools like ChatGPT to write contracts, answer legal questions, and even create terms and conditions. And I get it. Legal services can be expensive. Time is short. AI seems like a clever workaround. But the truth is, when it comes to law, large language models are not fit for purpose.

The illusion of accuracy

Let’s start with how these tools work. Large language models are trained on huge quantities of publicly available text, mostly scraped from the internet, and mostly American. That’s already a red flag for any UK business. But the bigger issue is that these models are not trained on verified legal content, nor are they designed to understand legal context. They’re guessing what the most plausible next word should be, not what the most legally accurate one is.

This means the contract or advice you’re given may look convincing but could be wildly inaccurate. And if you’re not legally trained yourself, you probably won’t spot the flaws.

One of the golden rules we tell clients is: never ask a large language model a question you don’t already know the answer to. That’s because you won’t be able to tell whether the response is true or simply close to the truth. And in law, close isn’t good enough.

The copyright conundrum

There’s another problem lurking in the background, particularly for content creators and anyone building their own AI tools. In the UK, you’re not allowed to do text and data mining for commercial purposes without a specific licence. It’s permitted for non-commercial research, but if you’re using that data to generate income, you could be in breach of copyright.

This matters not just to developers but to startups using US-based tools that may not comply with UK regulations. If you’re training your model on scraped data, or relying on tools that were, you could be introducing risk without realising it.

Who owns AI-generated content?

Then there’s the question of authorship. Under UK law, the author of a work must be a human being. That creates uncertainty for businesses using AI-generated content, especially in professional services.

Say you’re a copywriter using ChatGPT to draft articles or marketing copy. If AI-generated work is later ruled to have no identifiable human author, it might not be protected under copyright law. Worse still, a client could argue they’ve paid for something that isn’t legally yours to sell.

This might seem niche, but legal protections underpin commercial relationships. If ownership is murky, your contracts, your IP and your professional credibility could all take a hit.

The data problem

AI is only as good as the data it learns from. And if you’re using personal or sensitive data to train a model, or to generate customer insights, you need to tread carefully. GDPR rules apply whether the data is being processed by a person or an algorithm.

If data hasn’t been fully anonymised, or if AI is being used in a way that creates new personal data profiles, you could easily find yourself in breach of privacy regulations. And that’s before you get to the moral and ethical implications.

There’s also the issue of bias. AI will always reflect the data it’s trained on. If your customer base is skewed by geography, gender, ethnicity or income, your AI will reflect and amplify those patterns. That might not be deliberate, but it can still lead to discriminatory outcomes. It’s not enough to say the bias wasn’t intentional. If your policies or decisions are based on flawed data, your business could be held accountable.

Contracts are not for robots

Perhaps the most dangerous use of AI in law is contract generation. It’s one of the most common things we’re seeing. A client sends over a draft agreement, and it quickly becomes clear it was written by AI. These documents often look professional on the surface, but underneath, they’re full of holes.

Most contracts are not available in the public domain. They’re private documents. So AI tools don’t have access to the depth or quality of content needed to produce something reliable. Even tools designed specifically for legal drafting often fall short. We’ve tested the gold-standard systems used by legal researchers, and even they struggle to get it right.

AI can get you 80 to 85 percent of the way there, but the missing 15 to 20 percent often contains the bits that really matter: liability clauses, termination triggers, jurisdiction language, and key definitions. Miss one of those, and you could be exposed to serious risk.

If you don’t have legal training, you won’t know what’s missing. You’ll assume the contract is sound, right up until the moment something goes wrong.

AI is a tool, not a shortcut

This isn’t to say AI has no role in the legal world. Used well, it can help with research, speed up admin tasks, and support lawyers in delivering a more efficient service. But that’s the point. It should support, not replace, professional legal advice.

Startups and scale-ups are often the most tempted to cut corners here. But the cost of getting it wrong can be far higher than the cost of getting it right from the start. Don’t outsource your legal judgement to an algorithm that doesn’t understand context, can’t interpret nuance, and doesn’t know the law of your country.

Use AI for what it’s good at. But when it comes to legal affairs, get a human involved.
