
Your firm and its employees are probably using generative artificial intelligence (GenAI), whether or not you realize it. They’ve been using it for a while. One industry survey found that 79 percent of legal professionals used AI in 2025, outpacing professionals in other industries.
Sometimes, these use cases are intentional and strategic. For instance, using AI to automate routine tasks such as document review, contract drafting, billing and scheduling yields efficiency gains and greater profit potential. Other times, it’s an unmitigated disaster. Just ask the dozens of lawyers getting into trouble for relying on generative AI without verification.
The problem isn’t the technology. It’s how we use it.
Many lawyers are using off-the-shelf models trained on general knowledge rather than legal-specific data or their own internal firm data. These general models risk encoding historical biases because they are trained on precedent and case law that may reflect outdated or discriminatory norms.
In other words, making AI a part of your practice isn’t as simple as buying a chatbot subscription. The true leaders and innovators will train their own internal models that leverage proprietary data to ensure accuracy, context and alignment with current legal standards as well as firm values.
Why Precedent Can’t Always Improve Performance
Law is built on stare decisis. We stand by things decided. We look to the past to guide the future.
However, when you feed an LLM raw, unfiltered case law, you are essentially asking it to think like a judge from the 19th or 20th century, with all the gender exclusion, racial inequities and socioeconomic biases included.
Ask an AI model to create a picture of a lawyer arguing before a judge. You will get a young white male in a suit arguing before an old white male in a robe.
An expansive study of AI’s implications in the courtroom found that “AI systems used in legal decision-making can exhibit racial and gender disparities, often reflecting biases present in the data on which they are trained.” For example, researchers at Tulane examined more than 50,000 convictions in Virginia in which judges used AI risk-assessment tools. They found that while AI sentencing cut jail time for low-risk offenders overall, racial bias persisted: Black defendants were treated less favorably than white defendants despite identical risk scores.
Training an LLM is like raising a child. Racism and bias are not born traits; they are learned behaviors. Like a child, an LLM mimics what it sees. If it observes bias in the historical record, it will replicate that bias in its output; if it is trained on the values of the law firm, it will mimic those values in its output.
This creates legal and ethical challenges that can erode client outcomes and disrupt company culture.
At McCready Law, diversity and progressiveness are core values, and we can’t allow an unfamiliar algorithm to dictate our approach to client advocacy. Training our own models lets us get the best from AI while selecting and filtering historical data to actively promote fairness and our values, rather than perpetuate historical inequities.
Strategies for Internal Training
If training your own LLM were an obvious option, every law firm would have done it already.
Many managing partners freeze when considering customization in their AI integration plans because they aren’t aware of the options, costs or ongoing upkeep involved.
We approach this by separating our AI operations into two distinct data buckets protected by strict internal controls.
Bucket 1: The Micro Analysis
This involves training the AI on a single client file. We can feed the model thousands of pages of medical records, depositions and police reports for a specific case.
Then we can query the model to identify inconsistent statements in a deposition or to summarize a medical timeline. Because the AI is restricted to this case’s data alone, hallucinations drop to near zero. It isn’t guessing; it’s retrieving information that our team can verify.
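What does that restriction look like in practice? Below is a minimal sketch of case-scoped retrieval, assuming a simple document store. Every name in it is a hypothetical illustration, and a production system would use embeddings, a vector database and a vetted model endpoint rather than keyword scoring; the essential idea survives the simplification, though — a hard filter keeps the model inside a single case file.

```python
# Minimal sketch of Bucket 1: retrieval restricted to one case file.
# All names here (Document, retrieve, build_prompt) are hypothetical;
# the key idea is the hard case_id filter, not the scoring method.
from dataclasses import dataclass

@dataclass
class Document:
    case_id: str   # every document is tagged with the case it belongs to
    source: str    # e.g., "deposition", "medical_record", "police_report"
    text: str

def retrieve(docs: list[Document], case_id: str, query: str, k: int = 3) -> list[Document]:
    """Return the k most relevant documents, restricted to a single case."""
    in_scope = [d for d in docs if d.case_id == case_id]  # the hard boundary
    terms = set(query.lower().split())
    # Naive keyword overlap stands in for embedding similarity here.
    scored = sorted(in_scope,
                    key=lambda d: sum(t in d.text.lower() for t in terms),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, context: list[Document]) -> str:
    """Ground the model in retrieved text so answers come from the record."""
    excerpts = "\n\n".join(f"[{d.source}] {d.text}" for d in context)
    return ("Answer using ONLY the excerpts below. If the answer is not "
            f"in them, say so.\n\n{excerpts}\n\nQuestion: {query}")

docs = [
    Document("case-001", "deposition", "Witness stated the light was green."),
    Document("case-001", "police_report", "Officer noted the light was red."),
    Document("case-002", "deposition", "Testimony from an unrelated matter."),
]
# The assembled prompt goes to whatever model endpoint the firm has vetted.
print(build_prompt("What color was the light?",
                   retrieve(docs, "case-001", "light color")))
```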
Bucket 2: The Macro Analysis
Our firm has been practicing for more than 25 years. We’ve worked on many cases and collected extensive data, including records of thousands of settlements and verdicts.
We train an LLM on our own history so we can ask strategic questions: “We have a case with a torn meniscus against Insurance Carrier X. How has this specific adjuster responded to this specific injury argument in the past five years?”
A public LLM can tell you the law. Only a custom LLM can surface your own institutional knowledge and put it to work to amplify your firm’s impact and outcomes.
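For the macro bucket, here is a hedged sketch of the same idea over structured firm history, assuming outcomes live in a queryable store; the Settlement fields, names and figures below are invented for illustration. The matching records get summarized and handed to the model as context — the same grounding pattern used in Bucket 1.

```python
# Minimal sketch of Bucket 2: querying the firm's own outcome history.
# Field names and sample data are hypothetical illustrations.
from dataclasses import dataclass
from datetime import date
from statistics import median

@dataclass
class Settlement:
    carrier: str
    adjuster: str
    injury: str
    closed: date
    recovered: int  # dollars recovered for the client

def history(records: list[Settlement], carrier: str, adjuster: str,
            injury: str, since: date) -> list[Settlement]:
    """Pull every comparable outcome so the model reasons from real precedent."""
    return [r for r in records
            if r.carrier == carrier and r.adjuster == adjuster
            and r.injury == injury and r.closed >= since]

records = [
    Settlement("Carrier X", "J. Doe", "torn meniscus", date(2022, 3, 1), 110_000),
    Settlement("Carrier X", "J. Doe", "torn meniscus", date(2024, 7, 9), 165_000),
]
matches = history(records, "Carrier X", "J. Doe", "torn meniscus", date(2020, 1, 1))
print(f"{len(matches)} comparable outcomes; median recovery "
      f"${median(r.recovered for r in matches):,.0f}")
```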
Keeping these buckets separate is critical; mixing the data sets is dangerous. Two years ago, in my initial enthusiasm, I considered consolidating everything into a single central AI brain.
Fortunately, we paused.
Data governance becomes an essential discipline once you get this far into developing your own tools. If you centralize all data without controls, your team risks exposing sensitive internal information to the entire firm.
Simply put, if the AI has access to HR files and partner emails, it will answer sensitive queries about employee performance or salaries, potentially revealing confidential information.
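One way to hold that line is a deny-by-default permission check that runs before a document can ever reach a prompt. The labels and roles below are hypothetical; the design point is that access control lives outside the model, so the model cannot be talked into revealing material that retrieval never handed it.

```python
# Hypothetical sketch: deny-by-default access check applied before retrieval.
# Label and role names are invented; adapt them to your firm's taxonomy.
SENSITIVE_LABELS = {"hr", "compensation", "partner_email"}

def visible_to(user_roles: set[str], doc_labels: set[str]) -> bool:
    """A document is visible only if the user holds a role matching every
    sensitive label on it; documents without sensitive labels pass through."""
    restricted = doc_labels & SENSITIVE_LABELS
    return restricted <= user_roles  # deny unless every restricted label is held

assert visible_to({"attorney"}, {"case_file"}) is True            # routine work
assert visible_to({"attorney"}, {"hr", "compensation"}) is False  # blocked
assert visible_to({"hr", "compensation"}, {"compensation"}) is True
```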
If data governance is the second most important aspect of AI adoption, what’s the first? Change management. Without strategic planning, you risk investing in systems that create confusion or resistance rather than value. Only 14 percent of organizations have a change management strategy, but it’s a crucial step toward an ROI that benefits your team, their work and your business.
The Verdict
Unlike most enterprise software, GenAI isn’t a product you purchase, set up and forget. Adopting it is more like hiring a new employee.
You wouldn’t hire a promising law school graduate and let them argue a case without mentorship, training and ethical guidance. We must treat our AI agents the same way we treat our team, teaching them the law, our values and specific methodologies.
GenAI today is the least capable it will ever be. We need to elevate our own capabilities alongside it, developing the skills and implementing the processes that ensure it is safe and effective for our firms and our clients.
This process starts with creating a custom product for your firm.

