When Texas Gov. Greg Abbott signed the App Store Accountability Act in late May, the second-most populous state in the union doubled down on age verification as the way to limit minors’ unfettered access to harmful online content, even as a constitutional challenge to a similar Texas law by the online porn industry awaits a ruling from the U.S. Supreme Court.
Texas’ new law, which is set to take effect on January 1, 2026, will require mobile phone app stores, the largest of which are run by Apple and Google, to verify users’ ages via a “commercially available method,” likely a photo ID or a “biometric selfie” for those without IDs. Users will then be assigned to one of four statutory age categories: “child” (under 13), “younger teenager” (13-15), “older teenager” (16-17), or “adult.” All minor accounts must be linked to a “verified” parent or guardian account, with the onus on the app store to perform that verification. Parental consent must then be obtained for each download or purchase, and the app store must notify developers if consent is revoked.
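To make the statutory bucketing concrete, here is a minimal Python sketch of how a verified age might map to the four categories the law defines. The function name, return strings, and error handling are illustrative assumptions, not anything the statute or the app stores specify:

```python
def statutory_age_category(verified_age: int) -> str:
    """Map a verified age to one of the four categories defined by
    Texas' App Store Accountability Act.

    Illustrative sketch only: the function name and return values
    are hypothetical; the age brackets come from the statute.
    """
    if verified_age < 0:
        raise ValueError("age must be non-negative")
    if verified_age < 13:
        return "child"             # under 13
    if verified_age <= 15:
        return "younger teenager"  # 13-15
    if verified_age <= 17:
        return "older teenager"    # 16-17
    return "adult"                 # 18 and up


if __name__ == "__main__":
    for age in (9, 14, 17, 30):
        print(age, "->", statutory_age_category(age))
```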
Verification within the app store does not let individual app developers off the hook. They will be required to assign an age rating to each app and each in-app purchase, based on the statutory age categories, and must provide both the rating and the reasons for it to the app stores.
Texas is the second state to pass such a law, after Utah, while several more states have introduced similar bills, as have federal lawmakers.
Age limits for social media apps enjoy broad bipartisan support in the U.S., as a growing body of research confirms what many parents see every day: addictive social media apps keep kids scrolling, algorithmically serving up content they aren’t even seeking, exposure that can lead to everything from increased anxiety, depression, and sleep disturbances to cyberbullying, sexual exploitation, and suicide.
Growing momentum for protections.
Advocates say social media companies have knowingly failed to safeguard children’s mental health, placing profits above safety, and must now be forced into accountability through new regulations. Age verification laws are just one of the legislative efforts underway to reduce minors’ access to harmful online content. Many states are banning cell phones during school hours. Some, including Texas, have proposed banning social media for those under age 16, as Australia has done. (Texas’ bill to do so, House Bill 186, missed a key deadline and failed, but the state did just pass a law banning cell phones in schools.)
At the federal level, a bipartisan group of lawmakers recently reintroduced the Kids Online Safety Act, or KOSA, which would establish a “duty of care” for social media and gaming companies, meaning they would have to take reasonable steps to prevent and mitigate certain harms they know their platforms and products are causing to young users.
Despite intense opposition from Big Tech companies, KOSA appeared to be on a glide path to becoming law in 2024, sailing through the Senate last summer after grieving families, blaming their losses on harms from online activity, held up photos of their dead children as five Big Tech CEOs testified during a Senate Judiciary Committee hearing.
But privacy concerns (and at least $51 million in lobbying by social media companies) kept House Speaker Mike Johnson from bringing the bill to a floor vote, effectively killing it. The latest version contains changes the bill’s sponsors say “make clear that KOSA would not censor, limit, or remove any content from the internet, and it does not give the FTC [Federal Trade Commission] or state Attorneys General the power to bring lawsuits over content or speech.”
But what about adult users?
As those efforts gather steam, age verification laws have raised significant privacy and First Amendment concerns.
Opponents argue that age verification requirements restrict adults’ access to legal online content in at least two ways: Those who show ID to pass through the age gate are effectively robbed of their anonymity, while those who lack government identification or whose age is misidentified may be barred from accessing certain websites or apps altogether.
Not surprisingly, Google and Apple, which will bear the costs of setting up ID verification systems and securely storing all this new data, have lobbied forcefully against app store age verification; Apple CEO Tim Cook personally called Abbott to urge him to veto the Texas bill. The companies argue that age verification should happen within each individual app. Social media companies, including Meta, the owner of Instagram and Facebook, and X, formerly Twitter, support age-gating at the app store level, calling it a “one stop” solution for parents.
Kathleen Farley, vice president of litigation for the Chamber of Progress, a group backed by Apple and Google’s parent company Alphabet, said the Texas law is likely to face First Amendment challenges. “A big path for challenge is that it burdens adult speech in attempting to regulate children’s speech,” Farley told Reuters last month. “I would say there are arguments that this is a content-based regulation singling out digital communication.”
Texas’ new law is not the state’s first age-verification effort. It passed similar legislation in 2023 targeting adult entertainment websites. While some sites instituted age verification, others, including Pornhub, simply blocked Texans (and residents of other states with similar laws) from accessing their sites. The Free Speech Coalition, a trade association for the adult industry, then filed suit against the state.
A federal judge in Austin barred the law from going into effect, concluding that it was likely unconstitutional, but that ruling was quickly overturned by the 5th U.S. Circuit Court of Appeals, allowing Texas to implement the law. The case made its way to the U.S. Supreme Court, which heard oral arguments in January and is expected to hand down a decision as early as this month. At issue is which level of judicial review should be used to evaluate the law.
What are the arguments?
The ACLU, which is representing the plaintiffs, argues that “the Supreme Court has long recognized that the government’s regulation of sexual expression must satisfy strict scrutiny if it imposes burdens on adults’ access to protected speech, even if the law was meant to protect children.”
The 5th Circuit, in allowing Texas’ law to be implemented, used a less rigorous standard, known as rational-basis review, which looks at whether the law advances a legitimate state interest—in this case, protecting minors from harmful online content—and, if so, whether there is a rational connection between that interest and the law.
The case represents a critical test of whether 2004’s Ashcroft v. ACLU precedent—which rejected mandatory age verification as too restrictive—remains viable in today’s internet landscape. It was unclear how the Supreme Court might rule in Free Speech Coalition v. Paxton, SCOTUSblog reported after oral arguments: “Some justices seemed to agree with the challengers, led by a trade group for the adult entertainment industry, that a federal appeals court in New Orleans should have applied a more stringent test to determine whether the law violates the First Amendment.” But Chief Justice John Roberts and Justice Clarence Thomas appeared to suggest that advances in technology might justify taking another look at the standard of review. Access to pornography, Roberts observed, has “exploded,” and several justices noted that content-filtering software has not been an effective solution.
Does a technological landscape in which harmful content is ever-easier to access justify a more deferential standard of review? As the justices grapple with that question, their decision could determine whether and how governments can regulate online speech in the name of protecting children. What the court decides may either offer a way forward for more states to regulate online content or stymie such efforts, despite their growing popularity.