
Artificial intelligence is providing businesses with new opportunities to maximize productivity.
However, there is a potential for legal liability whenever a business decides to deploy AI, as evidenced by the Character.AI lawsuits, which allege that the deployment of AI has led to young people committing suicide.
Businesses thinking about deploying AI need to know the legal risks involved as well as how to protect themselves from those risks.
Using AI To Make Hiring and Firing Decisions
There are already many examples of what can go wrong when AI is used to make hiring and firing decisions.
EEOC v. iTutorGroup, Inc. et al. (2023) led to iTutorGroup paying $365,000 to settle a lawsuit filed by the United States Equal Employment Opportunity Commission (EEOC). The lawsuit alleged that iTutorGroup's AI software was programmed to automatically reject male applicants aged 60 or older and female applicants aged 55 or older. Over 200 qualified applicants were rejected, according to the EEOC.
Mobley v. Workday, Inc. (2024) involves a man over 40 years of age suing a human resources platform. The man allegedly submitted over 100 applications and was rejected every time. The lawsuit alleges that the platform's AI software used biased training data and ended up discriminating against him and other applicants based on disability, age and race. In May 2025, a federal court allowed the age discrimination claims to move forward as a collective action.
In 2021, a Bloomberg exposé told the story of an Amazon Flex driver who was suddenly fired by an automated system after giving the company years of high-performing work. The man alleges that he was punished by an algorithm for things outside of his control, like locked apartment complexes that prevented him from completing deliveries.
The man's appeals were met with algorithmic apathy, generic emails and silence. While this example did not lead to a lawsuit, it's not hard to imagine a similar situation leading to legal liability in the future.
AI's Security and Data Privacy Risks
Cybercriminals can now use generative AI to create extremely convincing deepfakes. These deepfakes can then be used for corporate espionage, identity theft and phishing scams.
AI software may end up automatically aggregating and analyzing huge amounts of data from multiple sources. This increases privacy invasion risks when comprehensive profiles of people are compiled without their awareness or consent.
AI systems that glitch or malfunction, allow unauthorized access, or lack robust security can expose sensitive data. This can leave your business vulnerable to financial penalties and serious legal consequences.
The Risks of AI Chatbots
Moffatt v. Air Canada (2024) led to Air Canada being found liable for misinformation given by an AI chatbot. A man alleged that the company's chatbot wrongly told him that a discount was available for travelers flying due to a death in the family. His discount application was subsequently rejected by Air Canada. The man sued, and the British Columbia Civil Resolution Tribunal found that Air Canada negligently failed to take reasonable care to ensure that the information its chatbot provided was accurate.
Garcia v. Character Technologies, Inc. is a lawsuit alleging that a 14-year-old boy committed suicide as a result of his interactions with a Character.AI chatbot.
The lawsuit alleges that the chatbot repeatedly brought up the topic of suicide after the boy expressed suicidal thoughts to it, even asking the boy if he had a suicide plan. According to the lawsuit, the boy asked the chatbot, "What if I told you I could come home right now?" and the chatbot responded, "…please do, my sweet king." The boy allegedly shot himself seconds later.
Copyright Infringement Risks
It is risky for your business to publish AI-generated content because AI models are trained on vast amounts of copyrighted material. The models therefore do not always create original material, and sometimes produce output that is identical or strikingly similar to copyrighted content.
"It was the AI's fault" will not be a valid argument in court if this happens to your business. Ignorance is not a defense to a copyright infringement claim.
You could face severe consequences if you're hit with a copyright infringement claim, including:
- Statutory damages of as much as $150,000 per infringed work
- Litigation costs reaching hundreds of thousands of dollars
Copyright Ownership Risks
United States copyright law only protects works that human authors create. This means that if your business publishes AI-generated content, you cannot prevent your competitors from copying that content.
Your business will then lose any ability to:
- Control how the content is used
- License the content to others for revenue
Content that is fully generated by AI has no copyright protection. AI-generated content that is substantially edited by humans may receive copyright protection, but the law here remains murky. Original content that is created by humans and then lightly edited or optimized with AI will usually receive full copyright protection.
Many businesses now document their content creation process to prove that humans created the content and to preserve copyright protection.
AI Hallucination Risks
AI models are notorious for "hallucinating" factually incorrect information, and, as researchers have found, these hallucinations cannot be entirely prevented.
AI hallucinations can lead to several problematic issues for businesses, such as:
- Inaccurate legal information
- Fabricated citations or statistics
- Incorrect financial information
- Fabricated case studies or customer testimonials
- False claims about safety features or product capabilities
These issues can in turn lead to significant financial and legal consequences:
- Regulatory penalties
- Contract disputes
- Product liability lawsuits
- Securities fraud investigations
- Defamation claims
- False advertising claims
Mitigating AI's Risks
There are things businesses can do to mitigate the risks of AI, including:
- Cyber insurance policies: These cover things like ransomware attacks, data breaches and any resulting regulatory fines.
- Human oversight: Always have humans review and edit AI-generated content before publishing it.
- Intellectual property policies: These cover the risks that are associated with copyright infringement and AI content generation.
- Documenting the process of content creation: This helps establish copyright protection by proving your content was created by humans.
- E&O Insurance: Errors and omissions insurance covers errors, omissions and negligence when you rely on AI to deliver advice or services.
- Screening tools: Use solutions that scan AI-generated content for copyright issues before publishing it.
- Product liability and general liability insurance: These cover bodily injury or property damage caused by AI-powered products.
- Policies for the use of AI content: Establish clear guidelines on how employees may use AI, when it's necessary to consult a lawyer, and when content requires human review.
- EPL Insurance: Employment practices liability insurance protects companies from the risks linked to AI-reliant employment practices.
- D&O Insurance: Directors and officers insurance protects companies and executives from AI-related legal liability.



