The Real AI Revolution Is Happening in the States

The Hollywood Deepfake Nightmare

Late one night in Los Angeles, a young actress discovered a video of herself that she never made. Her face had been deepfaked onto explicit footage, a malicious fabrication that spread across social media. She was horrified and helpless. There was no federal law explicitly banning this violation of privacy. But in Sacramento, lawmakers took notice. In 2019, California passed AB 602, giving victims of non-consensual deepfake pornography the right to sue those who create and distribute it.

The actress’s ordeal, though harrowing, became a catalyst for change, not on Capitol Hill, but at the state level. Her story is just one example of how states are stepping up to confront the real-world harms of artificial intelligence (AI).

The bottom line up front

States are now better positioned than the federal government to lead AI policy. In 2024 alone, 45 states introduced 693 AI bills, with 113 becoming law. California’s SB53 was signed into law this week, with New York’s RAISE Act hot on its heels. This state-level surge is a “technology federalism” revolution. The result is a patchwork of innovative state laws that address AI’s challenges faster and more pragmatically than any nationwide plan.

The Big Question

In the summer of this year, a battle on Capitol Hill was underway over the passage of the One Big Beautiful Bill Act. The version that passed the US House included a 10-year moratorium on any regulation of AI at the state level. The Senate stripped that provision out by a 99-1 vote, a lopsided tally suggesting that while the appetite for overarching federal AI legislation is nearly zero, the need for some form of AI regulation, and for states’ freedom to provide it, is clear to nearly everyone.

The big question is how we balance the speed of AI innovation with the necessity of making AI models and systems safe for users and transparent for business operations. As researchers have noted, “The mismatch between evaluation and real-world use is becoming increasingly consequential as interactive AI systems proliferate in homes, schools, and workplaces.”

Laboratories of Democracy in Action

This is where states come in.

Justice Louis Brandeis famously praised the states as “laboratories of democracy,” where a single courageous state can try novel social and economic experiments without risk to the rest of the country.

That concept is on vivid display in the realm of AI policy. The diversity of state efforts is exactly what Brandeis envisioned: many experiments, one union. Consider just a few examples of how major states are innovating:

California and The SB53 Breakthrough

On September 29, 2025, California made history. Governor Gavin Newsom signed SB53, the Transparency in Frontier Artificial Intelligence Act, making California the first state to regulate frontier AI model safety.

Here’s what makes SB53 a blueprint for the nation:

The law requires large frontier developers to publish safety frameworks describing how they incorporate national standards, international standards, and industry best practices, and it creates mechanisms for companies and the public to report critical safety incidents to California’s Office of Emergency Services. Companies have 15 days to report a critical incident, or just 24 hours if there is a risk of imminent harm.
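
To make the two reporting windows concrete, here is a minimal sketch in Python. The function name and inputs are hypothetical illustrations of the rule as summarized above; the statute’s exact definitions of “critical incident” and “imminent harm” control in practice.

```python
from datetime import datetime, timedelta

def reporting_deadline(discovered_at: datetime, imminent_harm: bool) -> datetime:
    """Latest time an incident may be reported under the two SB53 windows
    described above: 24 hours when harm is imminent, otherwise 15 days."""
    window = timedelta(hours=24) if imminent_harm else timedelta(days=15)
    return discovered_at + window

# Example: an incident discovered now, with no imminent harm.
print(reporting_deadline(datetime.now(), imminent_harm=False))
```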

But the real story? California learned by iterating. Just one year earlier, Newsom vetoed SB1047, a more stringent bill requiring kill switches and rigid testing regimes, warning it could burden California’s AI industry. Instead of giving up, Newsom convened the Joint California Policy Working Group on AI Frontier Models, a team of world-leading AI academics who released recommendations on evidence-based policymaking.

Senator Scott Wiener went back to the drawing board. SB53 trades prescriptive engineering mandates for standardized disclosure and accountability, focusing narrowly on “large frontier developers” with over $500 million in annual revenue that train models at 10^26 FLOPs or higher, sparing smaller companies from immediate obligations.
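
For a sense of how narrow that scope is, here is a hedged sketch of the applicability test using the two thresholds quoted above. The function and constant names are illustrative, not drawn from the bill text.

```python
LARGE_DEVELOPER_REVENUE_USD = 500_000_000  # over $500 million in annual revenue
FRONTIER_TRAINING_FLOPS = 1e26             # models trained at 10^26 FLOPs or higher

def is_large_frontier_developer(annual_revenue_usd: float,
                                training_flops: float) -> bool:
    """True only if both SB53 thresholds, as described above, are met."""
    return (annual_revenue_usd > LARGE_DEVELOPER_REVENUE_USD
            and training_flops >= FRONTIER_TRAINING_FLOPS)

# A startup training a mid-size model falls outside the law's immediate scope.
print(is_large_frontier_developer(2_000_000, 1e24))        # False
print(is_large_frontier_developer(1_000_000_000, 1e26))    # True
```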

The innovation catalysts embedded in SB53

Cal Compute Consortium: SB53 establishes Cal Compute within the Government Operations Agency to develop a public computing cluster, advancing safe, ethical, and equitable AI by offering free and low-cost compute access to startups and academic researchers.

Whistleblower Protection: The bill protects employees who disclose significant health and safety risks posed by frontier models, with civil penalties for noncompliance enforced by the Attorney General’s office. Given recent safety researcher exits from major AI labs, this could empower insiders to speak up.

Real-World Incident Reporting: SB53 contains a world-first requirement that companies disclose safety incidents involving dangerous deceptive behavior by autonomous AI systems.

Newsom framed it simply: “California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive. This legislation strikes that balance.”

Why this matters beyond California

With 32 of the world’s top 50 AI companies based in California, SB53 will have worldwide ramifications. Gov. Newsom stated that California’s status as a global technology leader provides a unique opportunity to create a blueprint for well-balanced AI policies beyond its borders, especially in the absence of the kind of comprehensive federal AI policy that many believe would do more to support startups.

Even AI companies are acknowledging the shift. Anthropic co-founder Jack Clark stated that Governor Newsom’s signature establishes meaningful transparency requirements without imposing prescriptive technical mandates. OpenAI spokesperson Jamie Radice noted they’re pleased California created a critical path toward harmonization with federal government approaches to AI safety.

New York and The RAISE Act

The Responsible AI Safety and Education (RAISE) Act passed New York’s legislature, with 84% public support, in July 2025. It targets only the most powerful AI systems (the ones costing $100M+ to build) and mandates that large AI companies publish a safety plan to prevent “widespread harm and destruction.” Tech companies must provide evidence that they test for safety before deployment, report serious incidents, and be transparent about protections.

Why this bill matters

Last year saw 233 documented AI incidents. New York City’s own $600K chatbot told businesses to break the law. A finance worker, deceived by deepfaked executives on a video call, transferred $25M to fraudsters. These aren’t hypothetical risks.

The bill isn’t perfect, but it’s narrow, targeted, and addresses catastrophic risks without crushing innovation. Nobel laureates back it. Over 100 AI researchers globally support it.

Texas and The TRAIGA Blueprint

Everything is bigger in Texas, including its ambition to lead on AI. In June 2025, Texas Governor Greg Abbott signed the Texas Responsible AI Governance Act (TRAIGA) into law, charting a comprehensive framework for AI within the state.

TRAIGA requires companies to disclose AI usage, establishes guidelines to prevent algorithmic bias, and even creates an “AI Sandbox” allowing experimental AI projects temporary regulatory exemptions.

Texas’s law uniquely pairs pro-innovation elements (to keep tech business booming in Austin) with guardrails protecting consumers. Notably, Texas has also empowered enforcement: just weeks after TRAIGA’s passage, the Texas Attorney General launched a sweeping investigation into AI firms’ data practices under new state privacy laws. The Lone Star State is showing that red states can be as proactive on AI as blue states, emphasizing transparency and personal liberty alongside innovation.

TRAIGA will go into effect on January 1, 2026.

Additional State Legislation Worth Noting

  • Virginia: Enacted legislation focusing on accountability in AI development and deployment.
  • Illinois: Passed HB 3563 in 2023, seeking to ensure that the design, development and use of AI is informed by collaborative dialogue with a variety of stakeholders.
  • Maryland: Adopted policies and procedures concerning state government development, procurement, deployment, use, and assessment of AI systems.
  • Vermont: Enacted legislation promoting interdisciplinary collaboration and protecting individuals from unsafe or ineffective AI systems.
  • Delaware: Passed legislation to create a regulatory sandbox framework for the testing of innovative and novel technologies that utilize agentic AI through its AI Commission.
  • Colorado: Passed the nation’s first comprehensive law against AI discrimination in 2024.

Together, these examples show how policy diversity is a strength. Each state is tackling AI from a unique angle: New York zeroing in on doomsday scenarios, Texas championing transparency and innovation, California safeguarding personal rights, Colorado ensuring fairness. This mosaic of approaches means the United States is, in effect, conducting dozens of policy experiments at once.

State-level action better reflects the American fabric

It is easy to imagine that the passage of a large landmark law on data, privacy, and/or AI will help with many of the issues that we see with these technologies. The frustrating part is that any monolithic legislation at the federal level is bound to omit critical nuances…and when what you are legislating involves both humans and AI, nuance is the name of the game.

Our friends across the Atlantic passed the EU AI Act, a monolithic AI regulation for member states, which entered into force on August 1, 2024. It follows the GDPR, whose global impact is hard to overstate. These laws are broad and innovative, but they are far from perfect, and they do not pick up every nuance of how humans actually use AI. Too often we think of technology regulation as regulating a piece of technology, when what we are really regulating is the interaction point between humans and technology. That’s a messy place to be, especially from a legal standpoint, which is why the state approach provides a better model.

States know their demographics and populations better than federal legislators do, giving them the ability to create finer, more nuanced legal approaches that address their residents’ specific values and harms. This makes state legislatures a perfect incubator for different approaches that may later be adopted as federal statute. For now, the appetite for a broad, GDPR-style piece of legislation in the US hovers right around zero. Ultimately, that may be fine, because this iterative process, while slow and sometimes frustrating, may prove the best way to build human-centered AI regulation.

Federal legislators still have a role, particularly in outlawing universally abhorrent AI activities such as non-consensual deepfake pornography. But AI regulation is hard, demanding a depth of knowledge that many legislators and policymakers lack. The states, meanwhile, can look at the problem from the human perspective and add a richness to AI legislation that goes beyond what we see in a lab.

Bridging the Lab-Human Gap

Underpinning all these advantages is a core insight: AI often fails when it meets real people. Time and again, we’ve seen AI systems that perform flawlessly in controlled lab conditions wreak havoc in the wild. The federal government has been slow to address this lab-to-reality gap. States, conversely, are targeting it head-on.

Whether it’s Colorado demanding algorithms prove themselves fair in practice, or New York insisting on safety plans for worst-case scenarios, state leaders are effectively forcing AI out of the theoretical realm and into human-centric stress tests. They are asking the questions that matter on Main Street: Does this algorithm actually work for all of our people? What happens when an elderly user, or a non-standard accent, or an unexpected behavior comes along?

The urgency of these questions cannot be overstated. Research confirms that four out of five AI projects flop when deployed in the real world, largely because developers didn’t account for the messy, unpredictable nature of human behavior. These failures carry enormous costs, not just in money, but in human lives and societal trust. A misdiagnosis by an AI medical tool, a discriminatory lending algorithm, a driverless car that doesn’t recognize a pedestrian in time: each of these has happened, and each has caused real harm. As New York’s Sen. Gounardes warned, letting powerful AI loose without safeguards is as reckless as letting a child ride in a car without a seatbelt. The human stakes are simply too high.

States, by virtue of their structure, are better able to mitigate those stakes. They can require more transparency, testing, and human oversight for AI within their jurisdictions. They can outlaw the especially dangerous use cases and impose auditing on high-risk systems. And they can do it now, while Congress debates. Indeed, federal inaction or gridlock is often cited by state lawmakers as a reason they’ve moved forward. “Faced with mounting evidence of AI harm, states are no longer waiting for federal action,” one report noted. Instead, states have filled the vacuum with practical rules to ensure AI meets society’s norms, not the other way around.

When Trickle Up Policy Works

Over time, the best ideas can migrate and combine, while the bad ideas are left on the cutting-room floor. In a fast-moving domain like AI, this decentralized innovation may be not just beneficial but essential.

Protections against identity and image misuse, like those the actress in our opening example needed, have now been extended to all Americans through the TAKE IT DOWN Act. This significant piece of legislation combats the harmful practice of nonconsensual online image publication. While it offers important protections for victims, it also raises complex legal and ethical considerations regarding free speech and platform responsibility.

In 2025, the momentum is at the state level. Washington, D.C. is watching and learning as the states craft the first draft of America’s AI rulebook. In the spirit of federalism, perhaps that’s how it should be. States will continue to experiment, iterate, and adapt, shining light on what works and what doesn’t. The gap between lab-tested AI and real human behavior will start to close as these policies force AI to prove itself in the real world.

Ultimately, a cohesive national AI strategy would benefit everyone. Businesses wouldn’t have 50 standards to juggle, and baseline protections could be guaranteed for all Americans.

Conclusion

For now, the smartest bet is on the states. They have the iterative speed, the policy diversity, and the close-to-home insight to govern AI in a human-centric way. Every new state law is a reminder that AI’s impacts are ultimately local, felt by someone’s neighbor, child, or employee. Solutions often start locally too. AI will undoubtedly transform society, but how it does so will be shaped by those willing to roll up their sleeves and engage with its effects on real people. Increasingly, those people are state legislators, governors, and activists working at the state and city level.
