
Blue, Red, Purple State: What California, Texas, and Colorado Teach Enterprises About AI Regulation
U.S. AI regulation is here. It’s fragmented across states, with more adopting legislation each year.
At the federal level, Congress has passed the Big Beautiful Bill, and the Senate voted to strike a provision that would have imposed a 10-year moratorium on state-level AI regulation. That’s good news for the many states, including California, Texas, and Colorado, that have already enacted AI laws guiding the design, deployment, and governance of AI systems.
Proponents of the now-defunct 10-year moratorium provision—including Texas Senator Ted Cruz—argued that a single federal approach to AI regulation is necessary to avoid inconsistent or conflicting state laws, and that preemption would ultimately foster innovation and economic growth.
Critics countered that preempting state laws designed to address AI harms could allow tech companies to ignore consumer privacy protections and leave consumers exposed.
Another angle to consider is how slowly Congress has moved to pass regulation that keeps pace with technology, leaving states like California, Texas, and Colorado to fill the regulatory gaps and respond to their constituents.
Regardless, the EU AI Act is already in force, and global corporations doing business in Europe must comply. Furthermore, states are continuing to push ahead with AI legislation. According to Gartner, by 2030, 50% of the U.S. population will be covered under state-level AI regulations in some form, up from 18% today. That means AI leaders must be ready to navigate a patchwork of AI regulations, all while increasing speed and trust in AI innovation. All this leads to one fundamental question:
What do enterprises need to know about state-level AI legislation to avoid regulatory whiplash?
Regulatory Trends from a Patchwork of U.S. State AI Legislation
To understand the important regulatory priorities across states, start with the three states leading the way on AI regulations: Colorado, Texas, and California.
Colorado, a historically “purple” state with a roughly even balance of Democrats and Republicans, passed the Colorado Artificial Intelligence Act (CAIA), which targets “high-risk” systems that influence life-altering decisions—like hiring, housing, education, and access to healthcare. It imposes a “duty of reasonable care” on both developers and deployers to prevent algorithmic discrimination, and it requires impact assessments, transparency notices, and appeal rights for affected consumers. It’s the first U.S. law of its kind—and will be enforced starting February 1, 2026.
Texas, a deep-red state, has taken a different approach, focusing on AI systems developed and/or deployed by government agencies. Texas State Rep. Giovanni Capriglione said, “By balancing innovation with public interest, we aim to create a blueprint for responsible AI use that other states and nations can follow.” Governor Greg Abbott signed the Texas Responsible AI Governance Act (TRAIGA) into law on June 23, 2025. Rather than classifying “high-risk” systems as Colorado does, TRAIGA concentrates on specific prohibited uses, such as behavioral manipulation, “social scoring,” and the capture of biometric identifiers without consent. TRAIGA also requires healthcare providers to disclose to patients that they are interacting with an AI system by the date of service or treatment. So far, TRAIGA is the only state AI law to include an “AI Sandbox” that promotes innovation in a safe environment, allowing developers to test a system before launching it under TRAIGA’s rules.
Then there’s blue California, a state that has voted Democratic in presidential elections since 1992. Often a regulatory pioneer, the state passed AB 2013 in September 2024, along with a slew of other AI laws, such as AB 3030 (GenAI healthcare disclosures), SB 942 (AI-generated content disclosure), and SB 1120 (requirements for health insurers using AI). AB 2013 is designed to provide transparency into GenAI: it requires AI developers to post information on their websites about the data used to train their AI systems or services. The other bills address a variety of challenges, aiming to crack down on deepfakes, require AI watermarking, protect children and workers, and combat AI-generated misinformation. But not all AI-related bills have made it through the legislature—Governor Newsom vetoed a separate bill, SB 1047, which faced opposition from the AI industry.
What unites these three states is a common acknowledgment that AI can’t govern itself. Colorado aims to prevent algorithmic discrimination by requiring disclosures and safeguards for high-risk AI; Texas prohibits intentional discrimination, manipulation, and other specified harmful uses and requires government agencies to disclose when people are interacting with AI; and California’s AB 2013 requires the disclosure of information about generative AI training datasets. Yet how each state responds—what it defines as “risky,” whom it holds responsible (developers, deployers, and/or distributors), and how it protects innovation (transparency, documentation, sandboxes)—varies.
And state divergences will be the real challenge for AI leaders.
How Do These Compare to the EU AI Act?
U.S. state-level regulations, although distinct, are influenced by the EU AI Act, particularly in their shared emphasis on a risk-based approach and the goal of mitigating algorithmic bias and discrimination.
The EU AI Act is currently the most comprehensive legal framework for AI. It employs a multi-tiered, risk-based approach, categorizing AI systems into four levels: unacceptable risk, high risk, limited risk, and minimal risk. Adopted in 2024 and entering into force that August, with its first obligations applying from February 2025, the EU AI Act imposes strict requirements on high-risk categories—especially those in finance, healthcare, law enforcement, and employment. It also reaches beyond EU borders, applying to any company whose AI systems touch European citizens.
Compared to U.S. state laws, the EU AI Act is broader in scope, tougher in enforcement, and clearer in structure. It also introduces transparency mandates for general-purpose AI models and imposes fines that scale with company size and revenue. Texas, Colorado, and California draw inspiration from this model—especially in their risk-based frameworks—but none yet match its regulatory depth or extraterritorial muscle.
Why Should Enterprises Care?
You should care because what’s happening in these three states is just the beginning. Whether at the state or federal level, more AI laws are on the immediate horizon. That means your AI models, algorithms, and decision engines will soon be judged by a checkerboard of legal standards.
What’s lawful in Austin may be banned in Sacramento. What’s required in Denver may be irrelevant in Boston. And even if federal AI regulation materializes, it’s unlikely to override state statutes anytime soon.
What AI Leaders Need to Do Now
Start by understanding that this is not just a legal or compliance function—it’s a cross-functional shift in how you manage risk, innovation, and trust.
You need visibility: What models are in production? What decisions are they making? Where are they being used—and by whom? Inventorying your AI portfolio and assessing model risk is now table stakes; the short sketch below shows one way a single inventory record might look.
You need control: Establish a governance framework that defines who’s accountable for which systems, under what conditions, and according to which standards. This isn’t about slowing innovation—it’s about protecting it.
You need assurance: That means documentation. Impact assessments. Lineage tracking. Audit logs. And the ability to demonstrate, in plain English, that your AI systems are fair, explainable, and properly monitored—because you’ll be asked to prove it.
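To make the visibility and assurance points concrete, here is a minimal, hypothetical sketch in Python of what one inventory record and one audit-log entry might look like. The field names, the RiskTier categories, and the log_governance_event helper are illustrative assumptions, not requirements of any law or features of any particular product; a real program would need far richer metadata and tooling.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum
import json


class RiskTier(Enum):
    """Illustrative risk tiers, loosely mirroring the EU AI Act's four categories."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


@dataclass
class ModelRecord:
    """One entry in a hypothetical enterprise AI inventory."""
    model_id: str
    owner: str                 # accountable team or executive
    purpose: str               # the decision the model influences
    deployed_in: list[str]     # jurisdictions where its outputs reach people
    risk_tier: RiskTier
    impact_assessment_done: bool = False


def log_governance_event(record: ModelRecord, event: str) -> str:
    """Return an append-ready audit entry: what happened, to which model, and when."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": record.model_id,
        "risk_tier": record.risk_tier.value,
        "event": event,
    }
    return json.dumps(entry)


# Example: a hiring-screen model that Colorado's CAIA would likely treat as high-risk.
resume_screener = ModelRecord(
    model_id="resume-screener-v3",
    owner="Talent Acquisition Analytics",
    purpose="shortlisting job applicants",
    deployed_in=["CO", "TX", "CA", "EU"],
    risk_tier=RiskTier.HIGH,
)

print(log_governance_event(resume_screener, "impact_assessment_scheduled"))
```

Even a structure this small forces the questions regulators will ask: who owns the model, what decision it influences, where it runs, and whether an impact assessment exists.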
Whether you align to NIST’s AI Risk Management Framework, ISO 42001, or your own internal standards, the point is this: AI governance is no longer optional. It’s the foundation for navigating this new era of state-by-state regulation—and for scaling AI effectively and responsibly in a world where AI risk is now regulated.
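For teams weighing reference frameworks, one lightweight starting point is simply to map existing controls to a framework’s structure so coverage gaps become visible. The sketch below is illustrative only, assuming the NIST AI RMF’s four functions (Govern, Map, Measure, Manage); the control names are hypothetical examples, not anything prescribed by NIST or ISO.

```python
# Illustrative only: group in-house controls under the NIST AI RMF's four
# functions (Govern, Map, Measure, Manage) so coverage gaps are easy to spot.
# The control names below are hypothetical examples, not prescribed by NIST.
controls_by_function = {
    "Govern": ["AI policy approved by the board", "named model owners"],
    "Map": ["model inventory with intended use and jurisdictions"],
    "Measure": ["bias and performance testing before release"],
    "Manage": ["production monitoring", "incident response and appeal process"],
}

# Flag any function that has no mapped controls yet.
for function, controls in controls_by_function.items():
    status = "OK" if controls else "GAP"
    print(f"{function:8} {status}: {len(controls)} control(s)")
```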
Final Thought
Red, blue, or purple: the state doesn’t matter. What matters is that AI regulation in the U.S. is real and enforceable. If your organization isn’t prepared now, it’s behind.
Now’s the time to embed AI governance into your operations to accelerate your AI innovation engine and stay ahead of a regulatory tide and competitors that won’t be turning back.
Pete Foley is CEO of ModelOp, provider of the leading AI lifecycle automation and governance software, purpose-built for enterprises. ModelOp enables organizations to bring all their AI initiatives—from GenAI and ML to regression models—to market faster, at scale, and with the confidence of end-to-end control, oversight, and value realization.