
Substack’s Algorithm Accidentally Reveals What We Already Knew: It’s The Nazi Bar Now

from the “we-think-you’ll-also-like-nazi-content” dept

Back in April 2023, when Substack CEO Chris Best refused to answer basic questions about whether his platform would allow racist content, I noted that his evasiveness was essentially hanging out a “Nazis Welcome” sign. By December, when the company doubled down and explicitly said they’d continue hosting and monetizing Nazi newsletters, they’d fully embraced their reputation as the Nazi bar.

Last week, we got a perfect demonstration of what happens when you build your platform’s reputation around welcoming Nazis: your recommendation algorithms stop merely tolerating Nazi content and start treating it as content worth promoting.

As Taylor Lorenz reported on User Mag’s Patreon account, Substack sent push notifications to users encouraging them to subscribe to “NatSocToday,” a newsletter that “describes itself as ‘a weekly newsletter featuring opinions and news important to the National Socialist and White Nationalist Community.’”

According to the report, the notification included the newsletter’s swastika logo, leading confused users to wonder why they were getting Nazi symbols pushed to their phones.

“I had [a swastika] pop up as a notification and I’m like, wtf is this? Why am I getting this?” one user said. “I was quite alarmed and blocked it.” Some users speculated that Substack had issued the push alert intentionally in order to generate engagement, or that it was tied to Substack’s recent fundraising round. Substack is primarily funded by Andreessen Horowitz, a firm whose founders have pushed extreme far-right rhetoric.

“I thought that Substack was just for diaries and things like that,” a user who posted about receiving the alert on his Instagram story told User Mag. “I didn’t realize there was such a prominent presence of the far right on the app.”

Substack’s response was predictable corporate damage control:

“We discovered an error that caused some people to receive push notifications they should never have received,” a spokesperson told User Mag. “In some cases, these notifications were extremely offensive or disturbing. This was a serious error, and we apologize for the distress it caused.”

But here’s the thing about algorithmic “errors”—they reveal the underlying patterns your system has learned. Recommendation algorithms don’t randomly select content to promote. They surface content based on engagement metrics: subscribers, likes, comments, and growth patterns. When Nazi content consistently hits those metrics, the algorithm learns to treat it as successful content worth promoting to similar users.
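To make that concrete, here is a minimal sketch of a purely engagement-driven recommender. This is not Substack’s actual code; every field name and weight below is a hypothetical illustration. The structural point is that nothing in the scoring function ever looks at what the content says, only at how it performs:

```python
from dataclasses import dataclass

@dataclass
class Newsletter:
    name: str
    subscribers: int
    likes: int
    comments: int
    weekly_growth: float  # fractional subscriber growth per week

def engagement_score(n: Newsletter) -> float:
    """Score a newsletter purely on engagement signals (hypothetical weights).

    Note what's absent: nothing here inspects the content itself. A Nazi
    newsletter with strong engagement scores exactly like any other
    newsletter with strong engagement.
    """
    return (
        1.0 * n.subscribers
        + 0.5 * n.likes
        + 2.0 * n.comments          # comments often weighted as "deep" engagement
        + 500.0 * n.weekly_growth   # growth rewarded heavily for discovery
    )

def pick_push_candidates(catalog: list[Newsletter], k: int = 3) -> list[Newsletter]:
    """Choose the top-k newsletters to push to users, content-blind."""
    return sorted(catalog, key=engagement_score, reverse=True)[:k]
```

The only thing that keeps a content-blind pipeline like this from pushing swastikas is a policy filter in front of it, and a platform that has explicitly declined to treat Nazi content as disallowed has, by that choice, left the gate open.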

There may be some randomness involved, and a single recommendation isn’t conclusive proof of how a system has been trained, but it at least raises serious questions about what Substack’s existing engagement data suggests people will like.

As Lorenz notes, the Nazi newsletter that got promoted has “746 subscribers and hundreds of collective likes on Substack Notes.” More troubling, users who clicked through were recommended “related content from another Nazi newsletter called White Rabbit,” which has over 8,600 subscribers and “is also being recommended on the Substack app through its ‘rising’ leaderboard.”

This isn’t a bug. It’s a feature working exactly as designed. Substack’s recommendation systems are doing precisely what they’re built to do: identify content that performs well within the platform’s ecosystem and surface it to potentially interested users. The “error” isn’t that the algorithm malfunctioned—it’s that Substack created conditions where Nazi content could thrive well enough to trigger promotional systems in the first place.

When you build a platform that explicitly welcomes Nazi content, don’t act surprised when that content performs well enough to trigger your promotional systems. When you’ve spent years defending your decision to help Nazis monetize their content, you can’t credibly claim to be “disturbed” when your algorithms recognize that Nazi content is succeeding on your platform.

The real tell here isn’t the push notification itself—it’s that Substack’s discovery systems are apparently treating Nazi newsletters as content worth surfacing to new users. That suggests these publications aren’t just surviving on Substack, they’re thriving well enough to register as “rising” content worthy of algorithmic promotion.
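Substack hasn’t published how its “rising” leaderboard works, but leaderboards like this typically rank by relative growth rather than absolute size, which is how a small newsletter with momentum can surface ahead of much larger, static publications. A hypothetical sketch, with invented numbers:

```python
def rising_rank(newsletters: list[dict]) -> list[dict]:
    """Rank newsletters by relative subscriber growth (hypothetical).

    Each entry: {"name": str, "subs_now": int, "subs_then": int}.
    Ranking by relative growth means a small-but-surging publication
    outranks a large-but-flat one, so momentum alone earns promotion.
    """
    def growth(n: dict) -> float:
        then = max(n["subs_then"], 1)
        return (n["subs_now"] - then) / then
    return sorted(newsletters, key=growth, reverse=True)

# Example with invented numbers: the surging small newsletter tops the board.
board = rising_rank([
    {"name": "BigEstablished", "subs_now": 100_000, "subs_then": 99_500},
    {"name": "SmallSurging",   "subs_now": 1_200,   "subs_then": 400},
])
print([n["name"] for n in board])  # ['SmallSurging', 'BigEstablished']
```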

This is the inevitable endpoint of Substack’s content moderation philosophy. You can’t spend years positioning yourself as the platform that won’t “censor” Nazi content, actively help those creators monetize, and then act shocked when your systems start treating that content as editorially valuable.

This distinction matters enormously for what sort of speech you are endorsing: there’s a world of difference between passively hosting speech and actively promoting it. When Substack defended hosting Nazi newsletters, they could claim they were simply providing infrastructure for discourse. But push notifications and algorithmic recommendations are something different: they’re editorial decisions about what content deserves amplification and which users might be interested in it.

To be clear, that’s entirely protected speech under the First Amendment, as all editorial choices are. Substack is allowed to promote Nazis. But they should stop pretending they don’t mean to. They’ve made it clear that they welcome literal Nazis on their platform, and now their algorithm has made it just as clear that Nazi content performs well.

This isn’t about Substack “supporting free speech”—it’s about Substack’s own editorial speech and what it’s choosing to say. They’re not just saying “Nazis welcome.” They’re saying “we think other people will like Nazi content too.”

And the public has every right to use their own free speech to call out and condemn such a choice, and to exercise their own freedom of association by saying “I won’t support Substack” because of it.

All the corporate apologies in the world can’t change what their algorithms revealed: when you welcome Nazis, you become the Nazi bar. And when you become the Nazi bar, your systems start working to bring more customers to the Nazis.

Your reputation remains what you allow. But it’s even more strongly connected to what you actively promote.

Filed Under: algorithms, association, chris best, featured, nazis, neo-nazis, promotion, recommendations, substack

Companies: substack
