
The Risks of AI-Specific Liability Regimes

At Politico’s AI and Tech Summit earlier this week, Senator Ted Cruz (R-TX) renewed his call for a federal moratorium on state AI laws, citing the growing patchwork of local regulation. Colorado and New York City have already adopted AI-specific liability regimes, and similar proposals are pending in legislatures nationwide. These measures rest on a flawed premise: that AI requires technology-specific liability rules distinct from the general legal principles that govern harmful conduct throughout society. In practice, singling out AI use for heightened penalties will deter adoption, favor well-resourced entities over start-ups and nonprofits, and undermine the promise of the AI revolution.

Over the past two years, states have raced to impose liability for specific harms generated by AI use. Colorado was first out of the gate: The Colorado AI Act places a duty to avoid algorithmic discrimination on entities that use AI systems to aid "high-risk" decisions in areas such as education, employment, lending, healthcare, housing, insurance, and legal services. The Act imposes several onerous requirements that such an entity must satisfy to mitigate the risk of bias, yet even then, compliance creates only a rebuttable presumption of reasonable care. New York City prohibits the use of AI tools to aid hiring decisions unless the employer undertakes annual bias audits. These laws, and many similar bills pending in state legislatures, are inspired by the European Union's AI Act, a comprehensive legal regime prescribing conditions on the use of AI decision-making in society.


But this European regulatory model is flawed: It wrongly treats the tool as uniquely suspect, rather than focusing on the harm caused. Our legal system already has well-established principles for addressing discrimination, unfair trade practices, and other harms, regardless of source. A job candidate denied a position because of his race suffers the same harm whether the denial stemmed from a biased supervisor, a faulty spreadsheet, or a poorly designed algorithm.

Creating specific liability rules for AI-aided decisions has several unintended consequences. Perhaps most obviously, it discourages AI adoption. Litigation risk makes companies hesitant to deploy AI technology, even when these tools could improve fairness or efficiency. Ironically, laws like Colorado's that target "high-risk" decisions ultimately deter AI deployment where it could be most beneficial to society. Properly calibrated, AI systems can reduce discrimination in high-risk sectors such as health care, employment, and lending by reducing the impact that human decision-makers' implicit biases would otherwise have on the process. And when they produce problematic results, AI systems can be tested, audited, and improved in ways people generally cannot. Over-regulating AI use thus risks locking in flawed human systems rather than encouraging better alternatives.

Second, regimes like Colorado's will tilt markets in favor of large, well-resourced players. Big tech companies and major corporate clients can absorb the model design, testing, auditing, and other compliance costs these legal regimes demand as a condition of AI deployment. They can also afford the larger legal teams and insurance coverage that help mitigate litigation risk. Startups, nonprofits, and smaller firms generally cannot. This approach will therefore concentrate AI development and use in the hands of a few dominant companies, limiting innovation and competition both in the AI developer market and in downstream markets for AI deployment.

Fortunately, some states have offered a better response to concerns about AI decision-making. Utah’s AI Law requires entities that use generative AI in consumer interactions to disclose this fact, and explicitly provides that AI use is not a defense to violations of the state’s consumer protection laws. Similarly, the Massachusetts Attorney General recently issued an advisory explaining how state consumer protection, anti-discrimination, and data security laws apply to AI-assisted operations. These approaches strike the right balance, providing redress for harm without penalizing AI adoption. Defendants cannot hide behind an argument that “the AI did it,” but neither do they face unique litigation risk merely for using AI rather than analog tools.

Regulators are right to be concerned about the risks associated with AI deployment. Bias, hallucinations, and other failures can cause serious harm if left unchecked. But the answer is not to build a separate body of AI-specific liability law that treats the technology itself as inherently suspect. The better path is to apply existing legal principles consistently, holding organizations accountable for discrimination, fraud, or negligence regardless of whether the source is human or machine. If AI adoption reveals that a class of decisions is not adequately protected by law, the fault lies with the law; the AI tool is only the messenger. Fear-driven AI-specific liability rules will not only blunt AI's benefits but also tilt those benefits toward society's largest and wealthiest players.
