The Case For Building Custom LLMs To Support Law Firms

By Michael McCready, Managing Partner of McCready Law

Your firm, and its employees, are probably using generative artificial intelligence (GenAI), whether or not you realize it. They've been using it for a while. One industry survey found that 79 percent of legal professionals used AI in 2025, outpacing professionals in other industries.

Sometimes, these use cases are intentional and strategic. For instance, using AI to automate routine tasks such as document review, contract drafting, billing and scheduling yields efficiency gains and greater profit potential. Other times, it's an unmitigated disaster. Just ask the dozens of lawyers getting into trouble for relying on generative AI without verification.

The problem isn't the technology. It's how we use it.

Many lawyers are using off-the-shelf models trained on general knowledge rather than legal-specific data or their own internal firm data. As a result, general AI models risk encoding historical biases because they are trained on precedent and case law that may reflect outdated or discriminatory norms.

In other words, making AI a part of your practice isn't as simple as buying a chatbot subscription. The true leaders and innovators will train their own internal models that leverage proprietary data to ensure accuracy, context and alignment with current legal standards as well as firm values.

Why Precedent Can't Always Improve Performance

Law is built on stare decisis. We stand by things decided. We look to the past to guide the future.

However, when you feed an LLM raw, unfiltered case law, you are essentially asking it to think like a judge from the 19th or 20th century, with all the gender exclusion, racial inequities and socioeconomic biases included.

Ask an AI model to create a picture of a lawyer arguing before a judge. You will get a young white male in a suit arguing before an old white male in a robe.

An expansive study of AI's implications in the courtroom found that "AI systems used in legal decision-making can exhibit racial and gender disparities, often reflecting biases present in the data on which they are trained." For example, researchers at Tulane examined over 50,000 convictions in Virginia in which judges used AI risk-assessment tools. They found that while AI sentencing cut jail time for low-risk offenders overall, racial bias persisted, and black defendants were treated less favorably than white defendants despite identical risk scores.

Training an LLM is like raising a child. Racism and bias are not born traits; they are learned behaviors. Like a child, an LLM mimics what it sees. If it observes bias in the historical record, it will replicate that bias in its output; if it is trained on the values of the law firm, it will mimic those values in its output.

This creates legal and ethical challenges that can erode client outcomes and disrupt company culture.

At McCready Law, diversity and progressiveness are core values, and we can't allow an unfamiliar algorithm to dictate our approach to client advocacy. Training our own models lets us get the best from AI while selecting and filtering historical data to actively promote fairness and our values, rather than perpetuate historical inequities.

Strategies for Internal Trainingย 

If training your own LLM were an obvious option, every law firm would have done it already.

Many managing partners freeze when considering customization in their AI integration plans. They aren't aware of the options, costs, or potential upkeep.

We approach this by separating our AI operations into two distinct data buckets protected by strict internal controls.ย 

Bucket 1: The Micro Analysis

This involves training the AI on a single client file. We can feed the model thousands of pages of medical records, depositions and police reports for a specific case.

Then we can query the model to identify inconsistent statements in a deposition or to summarize a medical timeline. Because the AI is restricted to only this case's data, hallucinations drop to near zero. It isn't guessing. It's retrieving information that our team can assess.
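At its core, this "micro" bucket is scoped retrieval. As a rough sketch only (the class names, fields and keyword-overlap scoring here are invented for illustration, not our actual stack), the safeguard is a hard filter that limits every query to one case's records before any ranking or generation happens:

```python
from dataclasses import dataclass

@dataclass
class Document:
    case_id: str   # which client file this record belongs to
    source: str    # e.g. "deposition", "medical_record", "police_report"
    text: str

class CaseScopedRetriever:
    """Retrieval restricted to a single case file (hypothetical sketch)."""

    def __init__(self, documents):
        self.documents = documents

    def retrieve(self, case_id, query, top_k=3):
        # Hard filter first: only this case's records are ever candidates,
        # so the model cannot pull in material from other matters.
        candidates = [d for d in self.documents if d.case_id == case_id]
        terms = set(query.lower().split())
        # Naive keyword-overlap ranking; a production system would use
        # embeddings, but the scoping guarantee is the point here.
        scored = sorted(
            candidates,
            key=lambda d: len(terms & set(d.text.lower().split())),
            reverse=True,
        )
        return scored[:top_k]

docs = [
    Document("case-001", "deposition", "Plaintiff stated the light was green."),
    Document("case-001", "police_report", "A witness said the light was red."),
    Document("case-002", "deposition", "Unrelated testimony about a contract."),
]
retriever = CaseScopedRetriever(docs)
results = retriever.retrieve("case-001", "what color was the light")
# Every result is guaranteed to come from case-001.
```

The design choice worth noting is that the filter runs before ranking, not after: material from other case files is never even a candidate, which is what pushes hallucination risk toward zero.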

Bucket 2: The Macro Analysis

Our firm has been practicing for more than 25 years. We've worked on many cases and collected extensive data, including records of thousands of settlements and verdicts.

We train an LLM on our own history to ask strategic questions: "We have a case with a torn meniscus against Insurance Carrier X. How has this specific adjuster responded to this specific injury argument in the past five years?"

A public LLM can tell you the law. Only a custom LLM can surface your firm's institutional knowledge and activate it to amplify your impact and outcomes.

Keeping these buckets separate is critical; mixing the data sets is dangerous. Two years ago, in my initial enthusiasm, I considered consolidating everything into a single central AI brain.

Fortunately, we paused.ย 

Data governance becomes essential once you are this far into developing your own tools. If you centralize all data without controls, your team risks exposing sensitive internal data to the entire firm.

Simply put, if the AI has access to HR files and partner emails, it will respond to sensitive queries, such as employee performance or salaries, potentially revealing confidential information.ย 
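One way to enforce that separation, sketched here in deliberately simplified form (the role names and record shapes are invented for illustration), is to filter by access level before any record ever reaches the model:

```python
from dataclasses import dataclass

@dataclass
class Record:
    text: str
    required_role: str  # "all", "hr" or "partner" in this toy example

def visible_records(records, user_roles):
    """Return only the records this user is cleared to see.

    The filter runs before retrieval or generation, so a query about
    salaries simply never sees HR files unless the user holds that role.
    """
    return [
        r for r in records
        if r.required_role == "all" or r.required_role in user_roles
    ]

store = [
    Record("Settlement summary for case-001.", "all"),
    Record("Employee performance review for a paralegal.", "hr"),
    Record("Partner compensation memo.", "partner"),
]

# A staff member sees only firm-wide records; an HR user also sees HR files.
staff_view = visible_records(store, user_roles={"staff"})
hr_view = visible_records(store, user_roles={"staff", "hr"})
```

The point of the sketch is where the control sits: access rules are applied to the data pipeline itself, not left to the model's judgment about which questions to answer.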

The second most important aspect of AI is data governance. The first, you may ask? Change management. Without strategic planning around AI, you run the risk of investing in systems that create confusion or resistance rather than value. Only 14% of organizations have a change management strategy, but it's a crucial step to ensure an ROI that benefits your team, their work and your business.

The Verdict

Unlike most enterprise software solutions, GenAI isn't just software you purchase, set up and forget. It's more like hiring a new employee than purchasing software.

You wouldn't hire a promising law school graduate and let them argue a case without mentorship, training and ethical guidance. We must treat our AI agents the same way we treat our team, teaching them the law, our values and specific methodologies.

GenAI is the least capable it's ever going to be. We need to elevate our capabilities through technology, developing the skills and implementing the processes that ensure it is safe and effective for our firms and our clients.

This process starts with creating a custom product for your firm.
