A controversial provision within a sweeping legislative package, dubbed “One Big Beautiful Bill,” is drawing intense criticism for its potential to impose a 10-year moratorium on state-level artificial intelligence regulations. Critics warn that, if enacted, this federal preemption would leave the burgeoning AI landscape largely unchecked, opening the door to significant societal harms without adequate recourse for affected citizens.
The provision, which has cleared the U.S. House of Representatives and is now under consideration in the Senate, broadly prohibits states from enforcing any law or regulation limiting or restricting AI models, systems, or automated decision-making processes for a decade. This move, primarily backed by those who argue it will foster innovation and maintain U.S. competitiveness in AI, has ignited a fierce debate about the balance between technological advancement and public safety.
Opponents, including numerous state lawmakers, civil society groups, and legal experts, argue that a decade-long regulatory vacuum is a dangerous proposition. Examples of AI’s potential for harm are already emerging, from discriminatory algorithmic biases affecting housing and employment to the proliferation of non-consensual deepfake pornography and concerns over AI’s impact on child safety and mental health. A 14-year-old in Florida reportedly died by suicide after interacting with a generative AI bot, underscoring the immediate need for protective measures.
States have been at the forefront of AI regulation, enacting hundreds of bills addressing concerns including healthcare uses, government applications, criminal misuse, and electoral integrity. Many of these pending and enacted laws aim to criminalize AI-generated child pornography or require disclosures for AI-generated content in political ads. The “One Big Beautiful Bill” would invalidate these efforts, stripping states of their ability to respond to emergent harms and leaving citizens vulnerable.
Proponents of the federal moratorium, such as Senator Ted Cruz, contend that a “50-state patchwork” of regulations could stifle AI development and hinder the U.S. in its race against global competitors like China. However, a growing chorus of dissenting voices, including policy researchers, argues that safety and innovation are not mutually exclusive: designing AI with safety in mind can lead to greater trust and long-term adoption.
A recent poll found that a significant majority of voters (73%) support AI regulation by both state and federal governments, and that 59% oppose a 10-year moratorium on state AI regulation. As the bill progresses through the Senate, the pressure to strike a balance between innovation and protection will undoubtedly intensify, with the well-being of the public hanging in the balance.