from the the-first-amendment-still-matters dept
Laws prohibiting minors from accessing AI-powered chatbots like ChatGPT would violate the First Amendment. But that’s not stopping lawmakers from trying.
Senator Josh Hawley has introduced the Guidelines for User Age-verification and Responsible Dialogue Act of 2025 (GUARD Act), which would require AI companies to “prohibit” minors under “18 years of age” from “accessing or using” AI chatbots that “produce[] new expressive content” in response to “open-ended natural-language or multimodal user input.” Earlier this year, Virginia and Oklahoma introduced similar bills, as did California last September. The crux is the same: to prohibit minors from accessing chatbots capable of producing human-like speech.
If passed, these bills will get struck down in court for violating the First Amendment, which prohibits laws “abridging the freedom of speech.” Specifically, minors have a First Amendment right to receive information. The Supreme Court has explained, “minors are entitled to a significant measure of First Amendment protection, and only in relatively narrow and well-defined circumstances may government bar public dissemination of protected materials to them.” This right applies to the Internet with full force.
When analyzing these laws under the First Amendment, a court would start by asking whether the government is regulating speech. Speech is a broad concept, including written and spoken words, photos, music, and other forms of expression like computer code and video games. Chatbot outputs are speech; they comprise all these forms of expression. Laws prohibiting minors from accessing chatbots regulate speech by cutting off young users from the ideas and information communicated in outputs.
Next, a court would assess whether minor chatbot bans regulate protected or unprotected speech. The vast majority of outputs are protected speech: teens use chatbots to search for information, get help with schoolwork, entertain themselves, and follow the news. Here, the only relevant category of unprotected speech is content that is obscene to minors. The GUARD Act, for example, states that “chatbots can generate and disseminate harmful or sexually explicit content to children,” and the Virginia bill would block chatbots “capable of … [e]ngaging in erotic or sexually explicit interactions with the minor user.” Sexually explicit outputs to minors are likely unprotected speech, but the bills go much further by blocking all youth access to chatbots.
Because these bills regulate a mix of protected and unprotected speech, the court would then assess whether the prohibition on teen usage is content-based or content-neutral. Content-based restrictions target speech based on its viewpoint, subject matter, topic, or substantive message. On the other hand, content-neutral laws regulate nonsubstantive aspects of speech, like its time, place, or manner.
These bills are content-based because they prohibit access based on the subject matter of chatbot outputs. The GUARD Act would prohibit minors from accessing chatbots capable of “interpersonal or emotional interaction, friendship, companionship, or therapeutic communication.” The Oklahoma bill would block chatbots that “express[] or invit[e] emotional attachment” or “form ongoing social or emotional bonds with users, whether or not such systems also provide information.” Similarly, the Virginia bill would ban minors from accessing chatbots “capable of … offering mental health therapy.” Regardless of the pros and cons of minors accessing such information, the prohibitions are based on the content of the outputs, not merely on nonsubstantive aspects of the speech.
Because these bills are content-based, the court would apply strict scrutiny. The government would have to prove that the bills are narrowly tailored to advance a compelling governmental interest and that they are the least restrictive means of serving that interest. Banning minors from accessing chatbots arguably advances “a compelling interest in protecting the physical and psychological well-being of minors” by “shielding minors from the influence of” obscene outputs.
Strict scrutiny, however, requires lawmakers to protect minors through means less restrictive than outright bans. Lawmakers could, for example, require AI companies to provide parental controls or strict safeguards preventing their models from engaging in sexually explicit conversations with young users. In fact, AI companies already have policies and features to protect minor users. Because such less restrictive alternatives exist, these bills aren’t narrowly tailored, and a court would strike them down for violating the First Amendment.
Banning minors from using chatbots is also bad policy. Last October, California Governor Gavin Newsom vetoed the state’s proposed ban, stating, “AI is already shaping the world, and it is imperative that adolescents learn how to safely interact with AI systems … We cannot prepare our youth for a future where AI is ubiquitous by preventing their use of these tools altogether.”
Most U.S. teens use AI chatbots. These young users have a First Amendment right to receive the information the AIs output, which is generally protected speech. Prohibiting access to chatbots would violate minors’ constitutional rights and deprive them of the vast benefits of AI.
Andy Jung is associate counsel at TechFreedom, a nonprofit, nonpartisan think tank focused on technology law and policy.
Filed Under: 1st amendment, ai, chatbots, josh hawley, kids, strict scrutiny