
The growing use of artificial intelligence in insurance operations, particularly in underwriting, claims handling, fraud detection and customer engagement, has prompted increasing regulatory attention. At the national level, the most significant policy development is the release of the National Association of Insurance Commissioners' (NAIC) Model Bulletin on the Use of Artificial Intelligence Systems by Insurers (model bulletin), finalized in late 2023. The bulletin provides a structured framework for states and insurers to promote responsible AI governance rooted in legal compliance, transparency and consumer protection, and it reflects emerging approaches to the ethical use of AI in insurance.
Some common themes across regulatory and industry-standard expectations include:
- Recognition of an obligation to use new technologies carefully and transparently; adverse coverage decisions must still be explainable with a specific rationale for denial.
- The need for targeted, sufficient testing of new predictive models, both by directly assessing model outputs and by tracking loss ratios over time for systemic impacts or gaps (a minimal monitoring sketch follows this list).
- A need for governance frameworks that incorporate risk assessments and ongoing oversight, with written policies, testing procedures and documented certifications where possible.
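As one illustration of the second theme, the sketch below monitors a pricing model by comparing realized loss ratios against the baseline the model was priced to. The function names, quarterly cadence and tolerance band are illustrative assumptions, not requirements drawn from any regulator's guidance.

```python
# Minimal sketch of ongoing model monitoring: flag a pricing model for
# review when the trailing loss ratio drifts from its pricing baseline.
# All names, data and thresholds here are illustrative assumptions.
from statistics import mean

def loss_ratio(incurred_losses: float, earned_premium: float) -> float:
    """Loss ratio for a period: incurred losses / earned premium."""
    return incurred_losses / earned_premium

def flag_drift(quarterly_ratios: list[float], baseline: float,
               tolerance: float = 0.05) -> bool:
    """True if the trailing four-quarter average loss ratio falls
    outside the tolerance band around the pricing baseline."""
    trailing = mean(quarterly_ratios[-4:])
    return abs(trailing - baseline) > tolerance

# Example: a model priced to a 0.65 loss ratio that is running hot.
ratios = [loss_ratio(66, 100), loss_ratio(68, 100),
          loss_ratio(72, 100), loss_ratio(75, 100)]
if flag_drift(ratios, baseline=0.65):
    print("Escalate: sustained gap between priced and actual loss ratios")
```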
The NAIC Model Bulletin
The model bulletin is driving a more standardized approach to AI governance in insurance, addressing the use of AI in regulated activities such as underwriting, pricing, claims handling, fraud detection and coverage determination. Because the bulletin's scope covers AI assistance with any regulated function, the regulatory model could apply to insurance practices for at least life, health, auto, homeowners, disability, long-term care and annuities.
The model bulletin directs insurers to establish a comprehensive AI systems program to ensure that their use of AI tools is lawful, fair, accountable and nondiscriminatory. While not binding, the bulletin has been adopted by more than two dozen states, currently making it the closest thing to a national standard. It suggests that insurers using AI should include the following as part of their AI governance:
- Documented Governance: Maintain written policies covering the development, acquisition, deployment and monitoring of AI tools, including those sourced from third-party vendors.
- Transparency and Explainability: Be able to explain how AI systems function, including how inputs lead to specific outputs or decisions.
- Fairness and Nondiscrimination: Evaluate AI systems for potential bias and unfair discrimination in regulated processes such as claims, underwriting and pricing, and proactively address any issues found (a screening heuristic is sketched after this list).
- Risk-Based Oversight: Apply more robust documentation, controls and testing to AI tools used in high-stakes decisions (e.g., coverage denials or rate setting) than to tools used for back-end operations or consumer outreach.
- Internal Controls and Auditability: Use independent audits, validation and regular reviews of AI model performance to demonstrate compliance and accuracy over time.
- Third-Party Vendor Management: Remain responsible for AI systems used in operations, even those developed externally, and demonstrate due diligence and contractual safeguards for AI services.
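To make the fairness expectation concrete, the sketch below applies the four-fifths rule, a common screening heuristic, to compare favorable-outcome rates across two groups. The group labels, data and 0.8 threshold are illustrative assumptions; the bulletin does not mandate any particular test.

```python
# Hypothetical disparate impact screen: compare favorable-outcome rates
# between two groups and flag ratios below the four-fifths (0.8) rule of
# thumb. Groups, data and threshold are illustrative assumptions only.
def favorable_rate(decisions: list[tuple[str, bool]], group: str) -> float:
    """Share of applicants in `group` receiving a favorable decision."""
    outcomes = [favorable for g, favorable in decisions if g == group]
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(decisions: list[tuple[str, bool]],
                           group: str, reference: str) -> float:
    """Ratio of a group's favorable rate to the reference group's;
    values below roughly 0.8 typically warrant documented review."""
    return favorable_rate(decisions, group) / favorable_rate(decisions, reference)

# Example: group B is approved at 60% versus 80% for reference group A.
decisions = ([("A", True)] * 80 + [("A", False)] * 20
             + [("B", True)] * 60 + [("B", False)] * 40)
ratio = disparate_impact_ratio(decisions, "B", "A")
if ratio < 0.8:
    print(f"Disparate impact ratio {ratio:.2f}: investigate and document")
```

A failing screen would not itself prove unfair discrimination, but it creates the documented trigger for the deeper review the bulletin contemplates.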
Insurers with existing AI governance frameworks that rely on established standards such as the NIST AI Risk Management Framework should not need to duplicate that work, although they may need to adapt or expand some areas. The model bulletin allows that AI systems programs can "adopt, incorporate, or rely upon" such frameworks to streamline compliance.
While the model bulletin sets out a nationwide framework for the responsible use of AI, several states have moved ahead with their own measures. Among the most notable are California, Colorado and New York, each of which has adopted a distinct approach to regulating how insurers may deploy algorithmic tools in decision-making.
California (SB 1120)
In 2024, California enacted Senate Bill 1120, one of the most specific and enforceable restrictions on insurers' use of AI in health care decision-making. Effective January 1, 2025, the law prohibits health insurers from denying, delaying or modifying coverage for medically necessary treatment based solely on an algorithm or automated tool. Key features include:
- Licensed Medical Review Requirement: Any adverse benefit determination that affects a patient's care must be individually reviewed by a licensed clinician (a minimal guard is sketched after this list).
- Transparency and Disclosure: Insurers must inform consumers if AI tools contributed to a coverage determination and ensure that appeals processes are accessible and timely.
- Alignment with Existing Nondiscrimination Requirements: The law reinforces Californiaās existing consumer protection standards, clarifying that algorithmic efficiency cannot justify violating fairness obligations under the Insurance Code.
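A minimal illustration of the licensed-review requirement might look like the guard below, which refuses to finalize an adverse, AI-influenced determination that lacks a clinician's individual review. The dataclass fields and exception are hypothetical scaffolding, not statutory language.

```python
# Sketch of an SB 1120-style guard: an adverse determination influenced
# by an algorithm cannot be finalized without a licensed clinician's
# individual review. Names here are hypothetical, not statutory terms.
from dataclasses import dataclass

@dataclass
class Determination:
    claim_id: str
    adverse: bool                        # denial, delay or modification
    ai_influenced: bool                  # an automated tool shaped the outcome
    clinician_license_id: str | None = None  # reviewing clinician, if any

class UnreviewedAdverseDetermination(Exception):
    pass

def finalize(d: Determination) -> Determination:
    """Block adverse, AI-influenced determinations that lack individual
    review by a licensed clinician; all others pass through unchanged."""
    if d.adverse and d.ai_influenced and d.clinician_license_id is None:
        raise UnreviewedAdverseDetermination(
            f"Claim {d.claim_id}: route to a licensed clinician before denial")
    return d
```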
SB 1120 reflects a growing recognition that while AI can assist with administrative efficiency, it should not be allowed to preempt or override individualized determinations.
Colorado (SB21-169)
Colorado's Senate Bill 21-169, which aims to prevent algorithmic discrimination, marked one of the first state-level legislative efforts to directly regulate the use of AI and big data analytics in insurance practices. Enacted in 2021 and fully in effect by 2023, the law prohibits insurers from using external consumer data and information sources (ECDIS), algorithms and predictive models in ways that result in unfair discrimination based on protected characteristics, including race, color, national origin, religion, sex, disability, gender identity and sexual orientation. Highlights include:
- Testing and Documentation: Insurers must test and document whether their algorithms result in unfair discrimination.
- Applicable Lines of Insurance: Unlike California's health focus, SB21-169 applies across life, health, property and casualty lines.
- Public Reporting: The Colorado Division of Insurance is collecting information on industry practices and issuing bulletins outlining expectations for compliance and disclosure.
Colorado's legislation marks a shift toward outcomes-based regulation, placing the burden on insurers not only to avoid intentional discrimination but also to detect and mitigate disparate impacts that may emerge from seemingly neutral data-driven models.
New York (Circular Letter No. 7, 2024)
New York's Department of Financial Services issued Insurance Circular Letter No. 7 (July 11, 2024), which communicated the department's expectations for transparency and justification from insurers operating in the state that use AI, machine learning or algorithmic systems in any part of their operations. The letter outlines several key regulatory principles:
- Compliance with Antidiscrimination Laws: Insurers must ensure that any use of AI does not violate New York's insurance law, including its prohibitions on unfair discrimination in underwriting and claims decisions.
- Explainability and Accountability: Insurers must maintain internal documentation that explains how an AI model functions, including its input data, assumptions and how outputs influence decisions (a sketch of such a record follows this list).
- Regulator Access and Review: Upon request, insurers must provide the department with detailed information about their AI systems, including vendor-supplied tools, risk mitigation controls, and testing for fairness and accuracy.
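One way to keep that documentation producible on request is to hold it as structured data rather than scattered prose. The record below is a sketch; its field names are assumptions, since the circular letter prescribes no particular schema.

```python
# Sketch of an internal AI model record an insurer could produce for a
# regulator on request. The schema and example values are illustrative
# assumptions; the circular letter does not prescribe any format.
from dataclasses import dataclass, field

@dataclass
class AIModelRecord:
    name: str
    vendor: str | None                  # None if developed in-house
    purpose: str                        # how outputs influence decisions
    input_data_sources: list[str]
    key_assumptions: list[str]
    fairness_tests: list[str] = field(default_factory=list)
    risk_controls: list[str] = field(default_factory=list)

record = AIModelRecord(
    name="claims-triage-v2",            # hypothetical model
    vendor="ExampleVendor Inc.",        # hypothetical vendor
    purpose="Prioritizes claims for manual review; never auto-denies",
    input_data_sources=["claims history", "provider notes"],
    key_assumptions=["training data reflects the current claim mix"],
    fairness_tests=["quarterly disparate impact screening"],
    risk_controls=["human review of all adverse outcomes"],
)
```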
New York's approach is significant because it bridges technical and legal oversight, requiring insurers both to test for bias and to maintain robust internal records that can withstand regulatory challenge and public scrutiny.
Together, California, Colorado and New York illustrate three different but complementary regulatory models. California seeks to ensure human review and clinical accountability in health care. Colorado targets data ethics and antidiscrimination across all lines of insurance. New York focuses on transparency, explainability and legal defensibility in the use of AI.
These actions are likely to influence both legislative proposals in other states and the federal debate on the appropriate role for AI in insurance services.
Real-World Impact
Despite promises of greater efficiency, the real-world use of AI and algorithmic tools in insurance, especially in health care, has led to multiple high-profile allegations of harm, regulatory intervention and litigation. These cases reveal the challenges that arise when opaque systems influence critical decisions such as coverage denials or early discharge from care.
- UnitedHealth and naviHealth's nH Predict: A class-action suit filed in the U.S. District Court for the District of Minnesota alleges that UnitedHealth and its subsidiary naviHealth used the nH Predict algorithm to systematically deny rehabilitative care for Medicare Advantage patients, even though the tool reportedly had error rates as high as 90%. Most patients who appealed won a reversal, but only a small fraction of those denied ever appealed, despite the potential severity of the consequences. In February 2025, a federal court allowed the class action to proceed on breach of contract and good faith claims across multiple states.
- Cigna's PxDx Algorithm: In July 2023, plaintiffs sued Cigna in federal court (E.D. California), claiming its PxDx algorithm was used to mass-deny thousands of claims without proper physician review. Relatedly, ProPublica reported that company doctors were denying claims without ever reading case files.
- Humana: A class-action suit was filed against Humana in the Western District of Kentucky in late 2023, accusing the insurer of using an undisclosed AI algorithm to deny rehabilitation care for seniors enrolled in Medicare Advantage. Plaintiffs charged that the insurer's algorithm overrode physician judgment and violated medical necessity standards.
In addition to lawsuits and regulatory actions, medical associations and advocacy groups have begun publicly criticizing the rise of automated denials in health care. The American Medical Association and the World Health Organization have both issued statements warning that AI-driven utilization management tools may violate ethical duties of care by interfering with clinician decision-making and delaying necessary treatment.
Industry Response and Compliance Challenges
Facing regulatory scrutiny and public attention, insurers are reconsidering and adapting their approach to AI. While many companies continue to emphasize the value of AI for improving efficiency, detecting fraud and speeding up administrative processes, they also acknowledge growing pressure to demonstrate that these systems operate fairly and lawfully, especially when they affect patient care or access to coverage.
Many insurers defend their use of AI and algorithmic tools as a necessary evolution in a data-rich, cost-constrained health care landscape. AI is credited with:
- Streamlining prior authorization decisions.
- Detecting billing irregularities or potential fraud.
- Accelerating internal workflows and improving the customer experience.
- Offering predictive modeling for population health and risk adjustment.
Insurers argue that these technologies can greatly reduce administrative costs and improve turnaround times for patients, provided they are implemented with appropriate safeguards. Tools like automated document processing, speech-to-text for clinical notes and AI-assisted customer support are also often cited as relatively low-risk, high-reward applications.
In response to growing legal and regulatory attention, many insurers recognize that AI-assisted coverage determinations should be reviewed by licensed clinicians and assert that their processes include this protection. Some have instituted hybrid models in which AI tools generate preliminary recommendations or risk scores but leave final decisions with human reviewers who are expected to exercise professional discretion.
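Such a hybrid workflow might route claims as in the sketch below: the model may fast-track clearly favorable outcomes, but anything else is queued for a human decision. The threshold and queue mechanics are assumptions for illustration, not any insurer's actual process.

```python
# Sketch of a hybrid review flow: the model auto-approves only
# high-confidence favorable outcomes; every other claim is queued for
# a human reviewer with final discretion. All values are assumptions.
def route(claim_id: str, approval_score: float,
          review_queue: list[str], threshold: float = 0.9) -> str:
    """Auto-approve above the confidence threshold; otherwise defer
    the final decision to a human reviewer."""
    if approval_score >= threshold:
        return "approved"
    review_queue.append(claim_id)       # a human makes the final call
    return "pending human review"

queue: list[str] = []
print(route("C-1001", 0.95, queue))     # approved
print(route("C-1002", 0.40, queue))     # pending human review
```

Note the asymmetry: the model is never permitted to issue the adverse outcome on its own, which aligns with the licensed-review expectations discussed above.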
Key takeaways for insurers include the need for robust AI governance that is cross-functional, risk-based and guided by clear policies. Vendor screening, impact assessments and audits are all critical tools of effective oversight, and all processes must be well documented.
Ongoing Tension
Despite evolving expectations, tensions remain between regulatory demands and the operational realities of AI in insurance. Some AI tools used in actuarial or clinical risk models are too complex to be easily interpretable by staff, raising concerns about whether insurers can truly explain and justify outcomes to regulators or consumers. In addition, many insurers license AI tools from third-party vendors, which limits their insight into model design and training data. While the NAIC and state regulators assert that liability cannot be outsourced, some insurers question the feasibility of validating "black box" vendor tools.
Without clear legal limits, insurers may find it difficult to determine where AI regulation begins and ends. Some worry that overly broad enforcement could stifle innovation or penalize legitimate efforts to modernize processes. Building AI systems programs, maintaining documentation and conducting fairness audits all carry operational costs that fall especially hard on small and mid-sized insurers. Regulators weighing consumer protections must also account for these costs so that compliance can scale without discouraging beneficial advancements.
Conclusion
AI in insurance is no longer a speculative risk; it is an operational reality with measurable, and sometimes life-altering, consequences. As in many other domains, automation without accountability can lead to harm, so human oversight, transparency and procedural fairness hold important roles in the decision-making process. Insurers are under growing pressure to reconcile the speed of technology with the slower, more deliberate demands of accountability. With frameworks like the NAIC Model Bulletin and states leading the charge with tailored rules, the future of AI in insurance will depend not just on technical performance but on public trust, ethical design and a demonstrable commitment to fairness.