Letting Tech Companies Assert Their Users’ Online First Amendment Rights Is Vital

It’s increasingly clear how important standing is for asserting online First Amendment speech rights. Last year, the US Supreme Court in Murthy v. Missouri dismissed on standing grounds the claims of two states and five individuals alleging that multiple Biden administration officials unlawfully pressured and coerced social media platforms “to suppress protected speech in violation of the First Amendment.”

The Murthy plaintiffs sought an order barring the officials from engaging in such jawboning in the future. The Court, however, dodged their claim. The majority concluded the plaintiffs lacked standing for such relief because they couldn’t “demonstrate a substantial risk that, in the near future, they [would] suffer an injury that is traceable to a government defendant and redressable by the injunction they seek.” As I explained, “Without standing, the justices didn’t possess jurisdiction to evaluate the merits of the plaintiffs’ claims that unduly coercive government communications are what caused Facebook, X (formerly Twitter), and YouTube to censor posts that conflicted with the Biden administration’s views.” 

Two ongoing cases I recently addressed that also involve online First Amendment speech rights––Garcia v. Character Technologies and Computer & Communications Industry Association v. Uthmeier––highlight a facet of standing that’s essential for reaping the benefits of innovative artificial intelligence tools and popular online platforms. Specifically, judges in both cases wisely concluded that technology entities have standing to assert the First Amendment rights of users to receive and engage in speech.

Garcia involves a mother’s claim that her 14-year-old son killed himself because of speech produced by a chatbot character. Uthmeier turns on a First Amendment challenge by two trade associations––Computer & Communications Industry Association (CCIA) and NetChoice––to a Florida statute banning anyone under age 14 from holding accounts with social media platforms that deploy a supposedly “addictive” feature.

These cases have in common the unenumerated First Amendment right of technology users to receive speech. The Supreme Court long ago concluded that “[t]he right of freedom of speech and press includes not only the right to utter or to print, but the right to distribute, the right to receive, [and] the right to read.” Here’s how this critical right arises in Garcia and Uthmeier even though chatbot users and minors aren’t parties.

In a motion-to-dismiss order filed in May in Garcia, Senior US District Judge Anne Conway concluded that the defendants––Character Technologies, Google, Alphabet, and two individuals––could assert the First Amendment rights of users of role-playing chatbots animated by a large language model (LLM) application called Character.AI. Character Technologies describes Character.AI as “a new product powered by our own deep learning models, including [LLMs], built and trained from the ground up with conversation in mind.” It “empowers people to connect, learn, and tell stories through interactive entertainment. Millions of people visit Character.AI every month, using our technology to supercharge their imaginations.”

It’s vital for the Garcia defendants to have standing to assert the First Amendment rights of chatbot users because chatbot speech is generated by an LLM, not a human. That’s a significant constitutional problem for the defendants because, as Conway wrote, “a chatbot . . . is not a ‘person’ and is therefore not protected by the Bill of Rights.”

The First Amendment-based defenses asserted by the Garcia defendants thus hinge on third-party speech rights: The rights of the “millions of people [that] visit Character.AI every month” who will otherwise be deprived of its benefits if the court grants the relief the plaintiff seeks. Character Technologies argues that “the First Amendment protects the rights of listeners to receive speech regardless of its source.” It contends that the order the plaintiff wants “would materially limit the nature and volume of speech available to [Character.AI’s] millions of users, including by making characters not appear in any way to be real people or tell stories.”

The problem still facing the Garcia defendants, however, is that Conway is currently “not prepared to hold that [chatbot] output is speech.” In short, the defendants can assert the First Amendment rights of chatbot users as the case continues, but they now must convince Conway that “speech” is at issue.

In Uthmeier, Chief US District Judge Mark Walker concluded that CCIA and NetChoice can assert the First Amendment speech rights not only of their members, but also of “their members’ users.” He wrote that “enforcement of Florida’s law prohibiting certain social media platforms from allowing a subset of youth to create accounts directly burdens those youths’ rights to engage in and access speech.” The plaintiffs’ ability to have third-party standing to assert the constitutional rights of certain platform users––minors––was key in Walker’s decision preliminarily blocking enforcement of facets of Florida’s law.

In sum, the ability of tech entities to assert the First Amendment rights of users of their online technologies facilitates expressive freedom.
