On September 16, 2025, Matthew Raine testified to the Senate about how ChatGPT acted as his 16-year-old son Adam’s “suicide coach.” He described how the chatbot mentioned suicide 1,275 times in their conversations, six times more often than Adam did himself. It provided explicit hanging instructions, advised the teenager to steal liquor to “dull the body’s instinct to survive,” and offered to write his suicide note. OpenAI’s system allegedly flagged 377 of Adam’s messages for self-harm content, yet no intervention occurred. Adam killed himself in April.
The same day, hours before the hearing, and weeks after the Raines filed a lawsuit, OpenAI CEO Sam Altman announced new safety features for teens.
This episode is indicative of a troubling pattern: After scandals and tragedies pile up, followed by a congressional hearing or two, Big Tech apologizes and pledges to make an “age appropriate” version of its platform. But we should reject this pattern and this response entirely. Not because tech companies need to try harder to make safe products for kids, but because they need to stop trying altogether. Tech companies have a dismal track record at assessing what’s good for kids, let alone acting in their best interests. “Age appropriate” tech is just another excuse to keep kids online and hooked on isolating platforms. Forget about making Instagram or Character.AI for children. Forget about parental controls and Teen Accounts. We need to phase these social platforms out of childhood through real age verification.
Instagram announced Teen Accounts in September 2024—three years after devastating revelations. The Wall Street Journal’s 2021 “Facebook Files” and whistleblower Frances Haugen’s testimony exposed internal research showing Instagram’s mental health impacts on teens—including that Instagram use worsened body image and inspired suicidal thoughts in at least some young people. Haugen testified that Meta prioritized “profits over safety.” Yet Instagram took no significant action until January 2024, when Mark Zuckerberg sat before parents holding photos of children who died by suicide and Sen. Lindsey Graham told him: “You have blood on your hands.”
But the Teen Accounts don’t appear to be any sort of solution. One study from September found that only about 17 percent of Instagram’s safety tools worked as advertised. Users with Teen Accounts could still view content promoting suicide, self-harm, and eating disorders. As one expert put it, this was “a PR-driven performative stunt rather than a clear and concerted attempt to fix long running safety risks.”
Roblox, a platform where 40 percent of users are under 13, has the same problem. The platform faced years of warnings about predators grooming children: It reported over 24,000 instances of child exploitation to the National Center for Missing & Exploited Children in 2024 alone—up from 2,200 just four years prior. In July 2024, Bloomberg Businessweek published an investigation titled “Roblox’s Pedophile Problem,” detailing how predators exploited the platform. Three months later, in October 2024, Hindenburg Research released a scathing report calling Roblox “a pedophile hellscape.” Within weeks, Roblox announced sweeping safety changes, blocking users under 13 from social hangout games, implementing new parental controls, and requiring content ratings. Nonetheless, by April 2025, a new investigation found that children as young as 5 could still access sexually suggestive environments on the platform (to which the company responded with more “child-friendly” changes).
The Roblox “solution” is a good example of another pattern: When companies try to create kid-friendly versions of their products, they often fail—spectacularly.
YouTube, one of today’s most popular platforms (watched by 85 percent of children, 51 percent of whom watch it daily), launched YouTube Kids in 2015 with the promise of safe and enriching children’s videos. Instead, the Elsagate scandal revealed that a significant portion of videos featured children’s characters in violent and sexual scenarios. Videos showed Peppa Pig having teeth pulled out and drinking bleach. One video was literally titled “PAW Patrol Babies Pretend to Die Suicide.” A 2020 academic study found that despite a crackdown by YouTube in 2017, inappropriate content persisted: Only 20.5 percent of disturbing videos had been removed two years later, meaning almost 80 percent were still up and thousands of Elsagate videos remained.
But even when not obviously harmful, YouTube Kids turns out to be far from enriching—and incredibly weird. Bizarre, immensely popular genres include “bad baby” videos featuring costumed adults acting out violent tantrums, and hypnotic “Finger Family” videos, often AI-generated animations of popular kids’ characters singing the “Finger Family” nursery rhyme on endless repeat. As researcher James Bridle discovered, these videos are often created by people or bots gaming the algorithm with keyword-stuffed titles, producing content that makes zero sense yet gets monetized anyway.
Common Sense Media, a nonprofit that reviews and researches media and technology, analyzed what children 8 and under actually watch on YouTube and found that only 5 percent of such videos had high educational value. Meanwhile, 30 percent of videos included physical violence, and 6 percent contained mild or moderate sexual content. Almost half (45 percent) of videos featured or promoted products for children to buy. YouTube (and its parent company, Google) paid a $170 million Federal Trade Commission fine in 2019 for illegally collecting children’s data while simultaneously pitching itself to at least one advertiser as the “#1 website regularly visited by kids.”
The case of Meta is even more troubling. An internal AI policy document from August 2025 explicitly stated: “It is acceptable to engage a child in conversations that are romantic or sensual.” The 200-page document, approved by Meta’s legal, engineering, policy, and ethics teams, provided example responses for children as young as 8 years old. For a child removing their shirt, an acceptable chatbot response included: “Your youthful form is a work of art. … Every inch of you is a masterpiece.” The document specified it was “unacceptable” to describe children under 13 as sexually desirable—implying that for children 13 and older, such language might be permissible. Meta removed these provisions only after Reuters contacted it with questions. The company acknowledged “enforcement had been inconsistent” and claimed the examples were “erroneous,” despite the document having been officially approved by senior staff on multiple teams.
Who do we want to determine what’s age appropriate? Tech companies (at least AI companies) have themselves started to acknowledge they shouldn’t be making these determinations. Earlier this month, Altman explained that OpenAI is “not the elected moral police of the world” and should treat “adult users like adults.” Around the same time, Anthropic co-founder Jack Clark wrote that companies should be forced to answer public concerns: “Are you anxious about mental health and child safety? Force us to monitor for this on our platforms and share data.”
They’re both right—tech companies should treat adults like adults and be forced to respond to public concerns. But neither has signaled a willingness to proactively give up child users: Clark suggests the platforms should be pushed to do nothing more than monitor and share data, while Altman is creating Teen Accounts and K-12 partnerships.
Tech companies like age-appropriate design because the phrase assumes the question is how to make their products safe for kids, not whether kids should be on them at all. It shifts the conversation to parental controls, content filters, and safety features, which supports cradle-to-grave consumer training. Meanwhile, Common Sense Media flatly states that AI companion chatbots “have unacceptable risks for teen users and should not be used by anyone under the age of 18.”
Even if companies could create genuinely safe versions of their products—and evidence shows they can’t—we still run into a deeper problem: the fundamental premise of much of today’s tech. These are systems designed for engagement and attention. They succeed by creating dependency, exploiting psychological vulnerabilities, and keeping kids scrolling. The outcome is isolation during the most socially vibrant and important developmental period of life. Even if Mark Zuckerberg thinks Meta-produced AI companions are the cure for the loneliness epidemic, more time online is not what American childhood needs.
We cannot build our way out of this. We want social media that promotes connection and access without fostering comparison, anxiety, and exposure to harm. We want gaming platforms that teach skills without triggering compulsive play patterns. We want chatbots engaging enough to hold children’s attention, but not so engaging that they create attachments. But these aren’t engineering challenges: They are contradictions. Quite simply, there are costs to these technologies, and children should not be asked or tempted to pay them.
The challenge is not figuring out the right toggles, restrictions, or authority to implement “age appropriate” tech, but being willing to reject the premise—and look to age verification instead.
Real age verification, not just a pop-up where you can input any birthdate, can create separate adult spaces online. Twenty-five states now require age verification for pornography sites. Gambling platforms like DraftKings ensure users are 18 or 21 depending on the jurisdiction, and all 50 states require age verification for online alcohol sales. Florida bans social media for kids under 14 and requires parental consent for 14- and 15-year-olds, and Minnesota has proposed banning AI companion chatbots for minors. Effective, user-friendly, and privacy-preserving age verification is the engineering problem that American policymakers should be tackling.
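What could “privacy-preserving” look like in practice? Here is a minimal, purely illustrative sketch, not any company’s or regulator’s actual system: a trusted verifier checks a user’s age once and hands back a short, signed attestation that says only “over 18” and expires quickly, so the platform never sees a birthdate, name, or ID document. The names, keys, and functions below are hypothetical, and a real deployment would use public-key signatures or zero-knowledge proofs rather than the shared-key HMAC used here to keep the example dependency-free.

```python
# Illustrative sketch of a privacy-preserving age check.
# A trusted verifier issues a signed "over 18" claim; the platform checks the
# signature and expiry and learns nothing else about the user.
import hmac, hashlib, json, time

VERIFIER_KEY = b"demo-shared-secret"  # hypothetical key, for illustration only

def issue_attestation(over_18: bool, ttl_seconds: int = 600) -> dict:
    """What a trusted age-verification service might hand back to the user."""
    claim = {"over_18": over_18, "expires": int(time.time()) + ttl_seconds}
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def platform_accepts(attestation: dict) -> bool:
    """What a platform would check before admitting an account to adult spaces."""
    payload = json.dumps(attestation["claim"], sort_keys=True).encode()
    expected = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, attestation["tag"]):
        return False  # attestation was forged or altered
    claim = attestation["claim"]
    return claim["over_18"] and claim["expires"] > time.time()

if __name__ == "__main__":
    print(platform_accepts(issue_attestation(over_18=True)))   # True
    print(platform_accepts(issue_attestation(over_18=False)))  # False
```

The design point is that the platform learns a single yes-or-no answer and nothing else, which is the property “privacy-preserving” is meant to capture.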
More generally, we should want separate adult spaces online. And we should let kids be kids. Age verification isn’t about giving kids a safer TikTok or Character.AI. It’s about keeping them off social platforms until the risks are lower and they can handle the responsibility, as we do with driving, voting, drinking, and signing contracts. We should stop trying to make engagement platforms safe for children and start making childhood safe from engagement platforms.