
While speaking at the Hill and Valley Forum in Washington, D.C., last week, White House AI czar David Sacks described the guiding principles behind the administration’s approach to technology and child safety: “Our North Star on that issue has been parental empowerment—we want to ultimately allow parents to decide what’s right for their children.”
Sacks, who is now stepping aside from his role as AI czar but will continue to advise the Trump White House, emphasized that critiques suggesting that such an approach may place too much burden on parents are “nonsensical. … This is what it means to be a parent. Do you want the government to decide what kids should see?”
The discussion came days after the White House released a four-page AI policy framework outlining the administration’s recommendations for what a single national rulebook governing AI might look like—including guidance on child safety. The White House hopes a national framework will swiftly pass through Congress and preempt state legislation, heading off its fears of a regulatory patchwork that could hinder AI innovation.
But divisions within the Republican Party over how to address child safety risks may only grow more bitter, threatening to unravel the coalition needed to produce the national AI framework the White House is hoping for.
Republicans such as Sens. Marsha Blackburn of Tennessee and Josh Hawley of Missouri have advocated for a different approach than the White House’s, pushing for greater legal liability and stronger regulations on technology companies themselves—an approach that Sacks has attacked. Legal verdicts last week may only increase tension over AI and child safety: A New Mexico jury ordered social media giant Meta to pay $375 million for knowingly enabling child sexual exploitation on its platforms, while a Los Angeles jury held Meta and YouTube liable for negligently designing addictive products that harmed the mental health of minors. “The rulings in New Mexico and Los Angeles underscore the critical need for Congress to impose a duty of care on social media and AI companies,” Michael Toscano, director of the Family First Technology Initiative at the conservative Institute for Family Studies, told The Dispatch.
Republicans had already been pushing for national AI regulation, placing greater legal responsibilities on technology companies regarding child safety. Just two days before the White House released its policy framework, Blackburn released her own updated draft proposal of a national AI policy—nearly 300 pages long—called the TRUMP AMERICA AI Act. In a statement to The Dispatch, Blackburn framed the proposal as a way to support innovation while protecting the “4 C’s” (children, creators, conservatives, and communities). “Over the last few months, I have worked diligently with the White House, conservative leaders, child safety advocates, members of the creative community, and AI innovators to develop legislation that can garner bipartisan support and accomplish the president’s goals,” she told The Dispatch.
Blackburn’s proposal draws on legislation previously introduced in the Senate that has since stalled, bills Toscano deems the “gold standard” for child safety legislation. It includes Blackburn’s Kids Online Safety Act (KOSA), which imposes a “duty of care” on social media platforms—legal obligations requiring companies to take “reasonable care” to prevent and reduce harm to minors through design decisions, such as infinite scrolling features and in-game purchases. But Blackburn’s new bill extends this duty to AI developers as well as social media companies—an increasingly salient topic following a number of high-profile suicides involving minors who were allegedly encouraged to take their own lives by AI chatbots.
Blackburn’s proposal also incorporates the GUARD Act—co-sponsored by Hawley and Democratic Sen. Richard Blumenthal of Connecticut—which requires AI chatbot providers to implement robust age verification measures, while banning anyone under age 18 from using “companion” chatbots that simulate interpersonal or emotional interactions.
While Blackburn’s new bill takes a muscular approach to child safety by including a “duty of care,” it is unclear whether the White House framework will take the same approach. The framework notes that “Congress should avoid setting ambiguous standards about permissible content, and open-ended liability, that could give rise to excessive litigation.” Yet what is meant by “ambiguous standards” is itself ambiguous: It could be a warning against the “duty of care” provision contained in KOSA, or it could simply be a caution against vague content moderation rules, in which case it may align more closely with Blackburn. “It could credibly be argued that it would be supportive of what Sen. Blackburn is doing, or it could be argued that it’s against, and it’s hard to really know actually until the actual negotiations begin,” said Toscano.
Child safety legislation that puts greater responsibility on technology companies is popular among many Senate Republicans. Yet efforts to pass such legislation face significant headwinds. Blackburn’s KOSA has 75 co-sponsors, including 41 Republicans, but has not received a floor vote. Meanwhile, earlier this month, a 12-bill child safety omnibus package known as the KIDS Act—which has been criticized by child safety advocacy groups for removing the “duty of care” mentioned in KOSA—passed the House Energy and Commerce Committee and has advanced to a vote on the House floor.
Many Republicans may be ready for a fight over child safety legislation. “In the physical world, we have laws on the books to protect children from harm. The virtual space should be no different,” Blackburn told The Dispatch in a statement. After the KIDS Act was introduced in the House, right-wing news outlet The Daily Signal reported that a bipartisan group of 20 Texas state senators wrote a letter to the state’s U.S. senators, expressing “serious concern” over the bill, given its weakness relative to the Senate’s KOSA legislation.
Last week, a new coalition of conservative groups formed to focus on AI and child safety. The Alliance for a Better Future includes organizations such as the National Center on Sexual Exploitation, the Institute for Family Studies, and the American Principles Project.
It has already endorsed Hawley’s GUARD Act, signaling its support for applying greater pressure on technology companies when it comes to child safety. “The overwhelming majority of Americans, especially Republican primary voters, believe we can have both world-leading AI, and commonsense safeguards at the same time. … We will be hammering this over and over again in Washington,” Alliance for a Better Future CEO Janet Kelly told The Dispatch.
Meanwhile, the White House has announced the first members of the President’s Council of Advisers on Science and Technology, including Marc Andreessen, a venture capitalist and author of the “Techno-Optimist Manifesto,” which argues that technology can answer virtually all human problems. Also tapped for the council was Meta CEO Mark Zuckerberg. (Elon Musk was conspicuously absent from the list.)
Any ensuing fight among Republicans over child safety legislation may be fierce, with critics arguing that a duty-of-care bill could harm free speech and hamper innovation. Meta and Google are reportedly spending vast amounts of money to prevent such provisions from becoming law.
Blackburn—deemed the most conservative member of Congress in 2020 by nonpartisan congressional tracking website GovTrack—has a track record of standing up to the White House over child safety and AI. Last summer, the Trump administration attempted to push through a 10-year moratorium on state AI laws by attaching it to the One Big Beautiful Bill Act. Blackburn initially worked with Republican Sen. Ted Cruz on a compromise, shortening the moratorium to five years, but she later pulled out, citing insufficient protection for creators, conservatives, and children. Cruz later told PBS News that had Blackburn not pulled out, the bill could have “easily” passed. Hawley has also emphasized child safety, reportedly describing it as the top priority when it comes to AI regulation.
Toscano fears the Trump administration may try to force through a national AI framework via a potential upcoming reconciliation bill, a procedural shortcut that allows budget-related legislation to bypass the Senate’s usual 60-vote requirement.
“I think this would be very good for the AI industry, but I think this would be very bad for the Republican Party if they did that,” Toscano said. “Voters would hold them accountable for it.”