
We are living in strange times. In 2023, lawyers in New York faced professional embarrassment after it was discovered they had submitted a court filing riddled with citations to non-existent cases. The cases had been "hallucinated" by ChatGPT. The incident, which surprisingly is not the only one of its kind, got me thinking about the potential limitations of AI-generated legal advice.
A recent survey showed that nearly half the UK population uses online health information to self-diagnose. Now, with the rise of tools like Perplexity and ChatGPT, will people and businesses turn to AI for legal advice in the same way? If professional lawyers are already turning to these tools for assistance, it seems inevitable that their clients will too.
To be clear, I'm not a technophobe, and I'm certainly not against the use of new technologies where they can aid efficiency. AI tools will certainly be useful in legal services. AI can analyse vast amounts of legal data in seconds, a job that would take human lawyers days or weeks. For businesses looking to reduce legal costs and improve efficiency, I understand the appeal of AI-driven legal assistance. However, while AI can be a powerful research tool and streamline certain processes, it cannot replace human expertise in interpreting the law or making strategic decisions.
AI in (hypothetical) action
How much you can rely on AI depends on the type of legal issue at hand. AI is well suited to straightforward tasks, such as generating legal templates or summarising case law. A business seeking a standard non-disclosure agreement (NDA) or a simple employment contract may find AI-generated drafts a good starting point. However, when dealing with nuanced or high-stakes matters such as mergers and acquisitions, regulatory compliance, or employment disputes, relying on AI for legal advice is extremely risky.
Take the world of corporate law. AI might suggest contract clauses based on past agreements, but it cannot assess the specific business dynamics or strategic implications of a deal. In an acquisition, for example, lawyers do not just draft contracts; they negotiate terms and identify regulatory risks. AI lacks the ability to think critically about these factors, and a business that relies solely on machine-generated contracts might find itself locked into unfavourable terms.
Fintech presents another area where AI-generated legal advice can be problematic. Given the fast-changing nature of financial regulations, an AI might provide guidance that is outdated or jurisdictionally incorrect. AI systems like ChatGPT are trained largely on material that is freely available on the internet. It's a lot of useful material, but it's not sufficient for staying abreast of highly technical legal matters in a fast-evolving domain. A fintech start-up using AI to determine its compliance obligations under anti-money laundering laws could be misled if the AI has not been updated with recent legislative changes.
In addition, regulatory frameworks often involve ambiguities that require careful legal interpretation. The way AI "thinks" is different from the way humans do, even if its responses appear to replicate our language. Because AI relies on pattern recognition rather than legal reasoning, this sort of interpretive challenge is beyond its current capabilities.
Employment law is another area that highlights the risks of getting your advice from AI. While AI can generate employment contracts, it may misinterpret key legal distinctions. For example, recent cases involving gig economy workers have shown that employment status is not always clear-cut, and businesses that misclassify workers could face lawsuits and financial penalties. Once again, AI lacks the human judgment needed to address these complexities and legal nuances.
The hybrid future?
Another obvious issue to highlight is the lack of accountability. If a lawyer makes a mistake, they have professional and ethical obligations to rectify it. An AI system is not designed to recognise its own mistakes and will not act on them without prompting and guidance from a human being. More importantly, an AI cannot be held legally responsible for incorrect or misleading advice, so there is no recourse through a malpractice claim.
Over the next decade, AI is expected to improve in several key areas, including real-time legal updates, more precise contextual analysis, and enhanced compliance monitoring. AI tools may soon be capable of flagging potential legal risks in contracts or identifying emerging litigation trends, making them invaluable aids to legal professionals. However, I don't anticipate AI replacing lawyers, and those who rely on AI for their legal advice will be taking on a lot of risk.
You might Google your medical symptoms to point yourself in the right direction, but you would never diagnose yourself with a serious illness without seeing a trained medical professional. The same applies to legal advice: human judgement and trained expertise will never be replaced. The future of legal advice is likely to involve a hybrid approach, in which AI acts as an efficient research assistant while human lawyers provide strategic oversight.