from the doing-the-impossible-and-the-necessary dept
Here’s the contradiction at the heart of the internet: everyone complains about content moderation, but no one wants to use an unmoderated platform. Everyone thinks trust & safety professionals are either censorial scolds or corporate lackeys, but everyone expects them to magically solve the inherent problems of human behavior at scale.
I spent last week at TrustCon, the premier annual conference for the trust and safety industry (you can see our live podcast, Ctrl-Alt-Speech, here), listening to people trying to square this impossible circle.
The trust & safety space is a strange one in that I think it’s one of the least understood but most important industries affecting tools most people use every day. So many people misunderstand the role of trust & safety professionals, often (falsely) thinking that they are censors (they are not) or that they have some magical ability to stop bad things on the internet (they do not).
As Alex Feerst wrote for us a few years ago, trust and safety is largely made up of caring people who really, really are trying to make the internet a better place, figuring out the best way to create rules for specific platforms that encourage good/helpful/useful behavior and minimize bad/dangerous/risky behavior. Every platform has different rules and incentive structures, but every platform that hosts third-party speech needs to have some sort of rules, with some sort of enforcement mechanisms.
This is true elsewhere as well. City council meetings have rules. Your homeowners association has processes. The book club you’re in has some guidelines. The idea is that in a society where groups form for common purposes, there always needs to be some set of principles to help people get along without causing damage. And each group can create its own rules and its own enforcement—some may be stricter than others, but people get to decide which sites and services, with which rules and enforcement, they feel most comfortable with, and use those.
A ton of trust & safety people legitimately care about making services work better and be safer for everyone. For all the talk of how they’re “censors,” a huge percentage of them come out of human rights and free speech work. For all the talk of how they’re holding back progress, a ton of them are there because they believe in the power of the internet to make the world better and are trying to help.
And yet many people believe that trust & safety people are something they are not. There are those, like Marc Andreessen (who, as a board member of Meta and many other internet companies, absolutely should know better), who believe that trust & safety is “the enemy of progress.” Or there are those, like Jim Jordan, who falsely believe that those keeping the internet safe are engaged in a vast conspiracy of censorship merely for enforcing the rules on private platforms.
But there are also those who seem to imagine too much power in the hands of trust & safety professionals and expect them to fix the inherent problems of humanity and society, something they could never do in the first place. We see regulators who think that companies need to be forced into doing things because their CEOs don’t care—but all that really tends to do is limit the ability of trust & safety folks to craft better, more creative solutions to the problems that human users bring to any service. Because the more government mandates come down from on high, the more those companies are forced to start checking boxes to show “compliance” rather than letting trust & safety experts actually figure out what works best.
It is a thankless task, but a necessary task. It’s not about censorship or holding back progress. Quite the opposite. It’s about making the wider internet safer for more people so that more people can benefit from the internet.
And my takeaway from TrustCon is that it’s being made much, much harder by those who don’t understand it.
One interesting thing I heard from multiple people was how the success of Jonathan Haidt’s “Anxious Generation” book had been a huge disaster for the entire field of trust & safety on multiple levels. I heard it from a couple of different people on the first day, and it took me a bit by surprise. After all, the book has been out for over a year, and while I have written about how it’s full of garbage, cherry-picked, and misleading research that has done real damage to how the public, the media, and policymakers think about child safety online, I had not considered as much its impact on actual online safety work.
As I learned from talking to folks in the field, the success of the book harmed safety efforts in multiple ways:
- The book has cemented the false narrative that social media is inherently harmful to all kids, despite plenty of evidence to the contrary (the evidence suggests social media is very helpful for some kids, neutral for most, and only harmful for a very small number—and often the harm is because of other, untreated, issues).
- This means that so much effort has been put towards the backwards, impossible, and harmful goal of blocking kids entirely from social media, taking attention away from interventions that actually help. That is, it has resulted in less work on better safety tools for kids and better education on how to use those tools in an age-appropriate way, with almost all of the effort focused on questionable and privacy-destroying age verification.
- The narrative also further cements the idea that there is no role for thoughtful trust & safety interventions, assuming that a pure abstinence approach is the only possible one.
In many ways, it’s an almost exact replica of failed “abstinence only” efforts around drugs, alcohol, and sex. We’re making the same damn mistake all over again, and so many people are willing to trust it because it’s in a best-selling book.
But there was a larger through-line in some of the discussions, which revolved around how the general “worldview” on trust and safety has changed, with attacks coming from across the broad political spectrum. You have some places, including the EU, the UK, and Australia, where governments haven’t bothered to understand the natural trade-offs of trust & safety and seem to think they can regulate the internet to safety. Meanwhile, books like Haidt’s have convinced policymakers that the solution to online harms is keeping kids off platforms entirely, rather than building better safety tools and better educating kids on how to use services appropriately.
That’s a fool’s errand.
At the other extreme, you have the MAGA VC world, which falsely believes that trust & safety is about censorship, is evil, and shouldn’t exist at all. And, in the US, those people currently have tremendous power, leading to nonsense from companies like Meta and X, which falsely imply that they can remove safeguards and guardrails and nothing will go wrong. So far, that hasn’t worked out too well, mainly because it’s based on a totally faulty understanding of how all this works.
The end result is… not great.
Trust & safety professionals I spoke to at TrustCon kept talking about how this environment has done tremendous damage to their ability to actually keep things safe. Daphne Keller’s piece about how regulators are turning trust & safety into a compliance function came up so many times at TrustCon that I think it was the unofficial article of the conference.
This gets to the heart of what I heard over and over: trust & safety professionals want to create better, safer services online, but feel trapped between impossible demands. Regulators want them to solve problems that can’t be solved through content moderation. Politicians attack them as censors. Best-selling authors like Jonathan Haidt blame them for harming children when they’re desperately trying to help, pushing everyone toward the failed “abstinence-only” approach of blocking kids from platforms entirely rather than building age-appropriate safety tools.
It’s an impossible situation to deal with, especially for a bunch of people who mostly, legitimately, are just trying to get people to play nice online in order to enjoy the wonder that is a global communication network.
Making this worse, some commentators are drawing exactly the wrong lessons from this crisis. Dean Jackson’s “realist’s perspective” on trust & safety concludes that only “state power” can realistically fix things—missing that regulations are already making trust & safety harder by forcing compliance theater over actual safety work.
> A realist assessment of the current moment suggests that one force capable of moving tech titans in a better direction—perhaps the only force short of a mass consumer movement—is state power.
This echoes what I heard about Haidt’s influence: his book’s success has convinced everyone that social media is inherently harmful to all kids, sucking resources away from nuanced interventions that might actually help. Instead of building better safety tools and a better overall ecosystem, everyone’s fixated on impossible age verification schemes. It’s abstinence-only education for the internet age. It’s creating “trust and safety theater” rather than actually building up either trust or safety.
We talked about this on the podcast, where I noted that I find Dean to be one of the more thoughtful journalists on this beat, but I felt this piece missed the mark. The piece is nuanced and certainly discusses the tradeoffs here, but it seems to treat the current moment—in which trust & safety is viewed so negatively across the board—as permanent, and concludes that the only real way to deal with this is via state power telling the companies what to do.
It strikes me as an odd conclusion that we need “state power” to make a better internet, when one of the big takeaways from the conference was how regulations are consistently making it more difficult for trust & safety folks to do their jobs well, pushing them instead toward regulatory compliance—checking boxes to keep regulators happy, rather than implementing systems and policies that actually keep people safer online.
This impossible situation helps explain why Casey Newton’s similar critique of the industry—that trust & safety leaders are, effectively, cowering and unwilling to speak up—also misses the mark. Newton wants them to quit their jobs and write op-eds defending their work. But why would they?
Look what happened to Yoel Roth, Twitter’s former head of trust & safety, when he spoke truthfully about his work: he was lied about, doxxed, and driven from his home. That’s the reward you get for public honesty in this space.
Newton’s frustration is understandable, but his solution—public martyrdom—ignores the basic risk assessment these professionals do for a living. When regulators, politicians, and pundits are all attacking you from different angles, going public just paints a bigger target on your back. And for what benefit? Better to keep your head down, check whatever boxes keep the wolves at bay, and try to actually make things better from within the constraints you’re given.
It’s not ideal, but it’s the kind of thing that people who care will do in these wild and ridiculous times.
I will note that many of the sessions at TrustCon this year (way more than in the past) were off-limits to the media. TrustCon lanyard badges indicate whether you’re in the media or not (every year they accidentally give me the wrong lanyard, and I remind them that I’m press and should be given the press lanyard). It sucks a bit for me, because it means I don’t get to go to those sessions, but given the state of everything, it’s totally understandable.
Trust & safety folks are legitimately working to make the services you and I rely on better. It’s an impossible task. You and I will disagree with decisions made on every platform. I guarantee you that people who work at these platforms will also disagree with some of the decisions made because there are no right answers. There are, as one friend in trust & safety likes to say, “only least bad options.” All options are bad.
But making the “least bad” decision still involves careful thought, deliberation, and an understanding of tradeoffs. TrustCon is an opportunity to share those difficult discussions and to think through the tradeoffs and considerations. And given how frequently the press misrepresents those tradeoffs, it’s not the least bit surprising to me that many sessions chose to keep reporters out.
So what’s the solution? Stop expecting magic from people doing impossible work.
Trust & safety professionals aren’t going to solve humanity’s problems. They can’t make perfect content moderation decisions at the scale of billions. They can’t eliminate all harmful content without also eliminating lots of legitimate speech. These are not bugs in the system—they’re features of trying to moderate human behavior at internet scale.
Instead of attacking them from all sides, we need to create space for them to make the least bad choices available. That means regulators who understand the natural tradeoffs involved rather than demanding impossible outcomes. It means politicians who don’t scapegoat them for broader social problems. And it means recognizing that while their work is imperfect, the alternative—no moderation at all—is far worse.
The internet isn’t broken because trust & safety is doing a bad job. It’s strained because we’ve asked them to solve problems that can’t be solved through content moderation alone. Until we acknowledge that basic reality, we’ll keep making their impossible job even harder.
Filed Under: censorship, content moderation, internet, jonathan haidt, progress, trust & safety, trustcon