In July, the Senate voted almost unanimously to remove a 10-year moratorium on state enforcement of artificial intelligence (AI) regulations. Originally embedded in a broader Republican-led domestic policy bill, the moratorium would have blocked states from creating and enforcing their own rules around AI until the mid-2030s.
The move to scrap this provision might seem like a significant bipartisan moment in American tech policy. But in practical terms, it changes very little and underscores a much more pressing issue. More than a regulatory problem, we have an AI literacy problem.
The illusion of urgency
At the heart of the Senate’s decision is a deceptively simple question: should states be allowed to regulate AI technologies independently, or should they wait for the federal government to catch up? Lawmakers appeared to reaffirm the importance of local control and timely oversight. But what this debate misses, and what many of its loudest advocates likely misunderstand, is that most of the meaningful regulation of AI is already happening.
In highly sensitive sectors like healthcare and finance, companies are already bound by existing data privacy, safety, and ethical rules. These industries are governed by laws like HIPAA, the Fair Credit Reporting Act, and the SEC’s rules on algorithmic trading, all of which impose indirect but effective restrictions on the use of AI.
Whether a predictive model is powered by traditional software or a large language model (LLM), the legal expectations around safety, bias, and fairness are already well-defined. Here are a few examples of existing safeguards in place:
1. HIPAA
The Health Insurance Portability and Accountability Act (HIPAA) doesn’t mention AI, but its requirements effectively regulate how AI can be used in healthcare. For example, if a hospital uses an AI model to recommend treatments, it must ensure that any data used or shared complies with HIPAA’s privacy and security standards. Essentially, AI developers working in healthcare are already navigating strict rules on how data is collected, stored, and processed, regardless of whether the AI tool itself is regulated.
2. NIST AI Risk Management Framework
The National Institute of Standards and Technology (NIST) has created a widely adopted AI Risk Management Framework (RMF), which outlines best practices for designing trustworthy, fair, and secure AI systems. Though not legally binding, the NIST framework is considered a de facto baseline for enterprise organizations. Many tech providers incorporate NIST standards into their internal compliance policies.
3. Big Tech’s Built-In Controls
Major cloud platforms like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud have established governance controls that restrict the use of their AI tools. For example, Microsoft requires users of its Azure OpenAI Service to comply with its Responsible AI Standard, which bans uses like automated surveillance, disinformation campaigns, or biometric identification without consent. Often, corporate policies operate more efficiently than government oversight thanks to the platforms’ direct control over infrastructure.
The bigger problem: AI literacy
More telling than the vote itself was the celebration that followed. In framing the moratorium as a dire threat to AI safety and responsibility, lawmakers revealed a clear misunderstanding of how AI systems are actually developed and deployed. The Senate’s enthusiasm for lifting the ban suggests a well-meaning but shallow grasp of how the technology works.
The real challenge isn’t that AI is under-regulated; it’s staying compliant with the rules we already have. In 2024, nearly 700 AI bills were introduced across 45 states, 31 were enacted, and 59 federal AI regulations were issued (NCSL), and those numbers are expected to rise in 2025. That count doesn’t include the domestic frameworks mentioned above that cover AI indirectly, nor international standards like the EU AI Act, the OECD AI Principles, and Canada’s Artificial Intelligence and Data Act (AIDA).
If anything, we’re moving into the territory of regulatory fatigue. Policymakers are treating AI as a completely new category, separate from existing frameworks. This is not to say we shouldn’t be creating and improving AI-specific laws. But focusing on how to update existing rules to account for new AI capabilities might be a better use of time. Treating AI as an entirely new category encourages reactive, piecemeal legislation that is unlikely to hold up as the technology evolves.
And the problem isn’t confined to one party. Leaders across the political spectrum routinely approach AI in abstract, often alarmist terms, without a clear understanding of the safeguards already in place. The result is a flood of symbolic legislation, like the now-defunct moratorium, that does little to improve safety, fairness, or accountability, but creates the perception that it does.
What happens next
Rather than focusing on whether states can or cannot enforce their own AI rules, lawmakers should be asking tougher questions: Are current privacy laws sufficient to handle the kinds of data AI systems rely on? Do existing protections extend to algorithmic decision-making? How can we ensure that AI doesn’t exacerbate existing inequities in clinical care, housing, employment, or education?
Those are complex questions, and answering them doesn’t require more legislation for its own sake, but smarter integration of AI into the frameworks we already have. That demands a baseline level of technical understanding and an acknowledgment that AI is less uncharted territory than a new toolset within familiar domains.
My company, Pacific AI, is dedicated to helping companies remain compliant with fast-changing AI laws and regulations. My business depends on other businesses prioritizing AI governance. But even as a strong advocate of using AI for good, I know that most AI misuse isn’t happening in a regulatory vacuum. It’s usually happening when existing laws are ignored, misapplied, or under-enforced.
More rules won’t help if we don’t enforce the ones we already have. And Senate votes won’t make a difference unless they’re backed by a meaningful commitment to both understand and enforce AI rules that lead to safe, effective, and responsible AI deployments at scale.
The Senate’s decision to eliminate the 10-year moratorium on AI regulation is not necessarily a bad thing, but it won’t reshape the AI landscape in any tangible way. What it really reveals is the gap between the pace of tech innovation and the depth of policymakers’ understanding. If we want meaningful progress, we don’t need more symbolic votes. We need a smarter, more informed conversation about AI and legislation that evolves as fast as the technology it aims to govern.