The centerpiece of the so-called “One Big Beautiful Bill” in tech policy circles was the “AI moratorium,” a temporary federal limit on state regulation of artificial intelligence. When the Senate stripped the moratorium from the bill, its loss elicited howls of derision from AI-focused policy experts such as the indefatigable Adam Thierer. But the moratorium debate may have distracted from an important principle: Regulation should be technology neutral. The concept of AI regulation is essentially broken, and neither states nor Congress should regulate AI as such.
Nothing is straightforward. The AI moratorium was not a moratorium at all. Contorted to fit into a budget reconciliation bill, it was meant to disincentivize regulation by withholding federal money for 10 years from states that are “limiting, restricting, or otherwise regulating artificial intelligence models.”

It is economically unwise for states to regulate products and services offered nationally or globally. When they do so unevenly, the likely result is a thicket of conflicting rules and lost innovation. Compliance costs rise out of proportion to the benefits, protections that more efficient laws could deliver at lower cost.
But I’m ordinarily a stout defender of the decentralized system created by our Constitution. I believe it is politically unwise to move power to remote levels of government. With Geoff Manne, I’ve written about avoiding burdensome state regulation through contracts rather than preemption of state law. So before the House AI Task Force’s meeting to consider federalism and preemption, I was in the “mushy middle.”
With the moratorium gone, federal AI regulation that preempts the states would give us efficient, uniform regulation, right? Nothing is straightforward.
Nobody—including at the federal level—actually knows what they are trying to regulate. Take a look at the definition of AI in the Colorado legislation, famously signed yet lamented by tech-savvy Governor Jared Polis. In Colorado, “Artificial Intelligence System” means
any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.
Try excluding an ordinary light switch from the definition; you must wrestle with semantics to do it. I’m struck by the meaningless dualities. Take “explicit or implicit objective.” Is there a third category? Or are these words meant to conjure some unidentified actor’s intent? See also “physical or virtual environments.” (Do you want to change all four tires? No, just the front two and back two.) Someone thought extra words would add meaning, but they actually confess its absence.
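To make the light-switch point concrete, here is a minimal sketch in Python (the function name and labels are hypothetical, invented only for illustration): a plain switch maps an input to an output that influences a physical environment, arguably ticking every box in the statutory definition without a trace of machine learning.

    # Hypothetical illustration, drawing on nothing from the statute beyond
    # its quoted wording: an ordinary light switch as a "machine-based
    # system" that "infers from the inputs the system receives how to
    # generate outputs ... that can influence physical ... environments."

    def light_switch(toggle_up: bool) -> str:
        """Map an input (the toggle position) to an output 'decision'
        serving an implicit objective (lighting the room)."""
        return "circuit_closed" if toggle_up else "circuit_open"

    print(light_switch(True))  # circuit_closed: the physical environment changes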
Defining AI is fraught because “artificial intelligence” is a marketing term, not a technology. For policymaking purposes, it’s an “anti-concept.” When “AI” took flight in the media, countless tech companies put it on their websites and in their sales pitches. That doesn’t mean that AI is an identifiable, regulatable thing.
So pieces of legislation like those in Colorado, New York, and Texas use word salads to regulate anything that amounts to computer-aided decision-making. Doing so will absorb countless hours as technologists and businesspeople consult with lawyers to parse statutes rather than build better products. And just think of the costs and complexities—and the abuses—when these laws turn out to regulate all decision-making that involves computers.
Technologies and marketing terms change rapidly. Human interests don’t. That’s why technology-neutral regulation is the best form—regulation that punishes bad outcomes no matter the means. Even before this age of activist legislatures, the law already barred killing people, whether with a hammer, an automobile, an automated threshing machine, or some machine that runs “AI.”
The Colorado legislation is a gaudy, complex, technology-specific effort to prevent wrongful discrimination. That is better done by barring discrimination as such, a complex problem even without the AI overlay. New York’s legislation is meant to help ensure that AI doesn’t kill people—a tiny but grossly hyped possibility. Delaying the adoption of AI through regulations like New York’s will probably kill more people (statistically, by denying life-extending innovations) than the regulations save.
Texas—well, who knows what the Texas bill is trying to do.
The demise of the AI moratorium will incline some to think that federal AI regulation is the path forward because it may preempt unwise state regulation. But federal regulation would not be any better. It would be worse in an important respect—slower and less likely to change with experience.
The principle of technology-neutral regulation suggests that there should not be any AI regulation at all. Rather, the law should address wrongs as wrongs no matter what instruments or technologies have a role in causing them.