Featured

Funniest/Most Insightful Comments Of The Week At Techdirt

This week, our first place winner on the insightful side is Stephen T. Stone with a rebuke to someone defending the Fifth Circuit’s ruling about whether a cop could sue Twitter:

By the logic of the Fifth Circuit’s rulings, Donald Trump can and should be held responsible for the actions of the rioters on the 6th of January 2021. Is that the position you wish to take?

In second place, it’s a long comment from Azuaron disagreeing with many parts of our post about the verdict against Meta:

Hold up

I don’t wholly agree with this ruling or its implications–The Encryption Problem, in particular, is a terrible argument that has to die–but I really have to address this section because it’s not accurate:

The trial judge in the California case bought this argument, ruling that because the claims were about “product design and other non-speech issues,” Section 230 didn’t apply. The New Mexico court reached a similar conclusion. Both cases then went to trial.

This distinction — between “design” and “content” — sounds reasonable for about three seconds. Then you realize it falls apart completely.

Here’s a thought experiment: imagine Instagram, but every single post is a video of paint drying. Same infinite scroll. Same autoplay. Same algorithmic recommendations. Same notification systems. Is anyone addicted? Is anyone harmed? Is anyone suing?

Of course not. Because infinite scroll is not inherently harmful. Autoplay is not inherently harmful. Algorithmic recommendations are not inherently harmful. These features only matter because of the content they deliver. The “addictive design” does nothing without the underlying user-generated content that makes people want to keep scrolling.

Instagram has, I’m sure, thousands of videos of paint drying that, I’m also sure, have very few views. Those videos have very few views because part of Instagram’s algorithmic recommendation system is to not serve videos of paint drying to people, because the design goal of Instagram is maximum addiction and use, which would not happen if their algorithm only recommended videos of paint drying.

The scenario of “Instagram, but with videos of paint drying. Same infinite scroll. Same autoplay. Same algorithmic recommendations. Same notification systems,” is the scenario we’re in now where we do have people addicted, we do have people harmed, and people are suing. Constraining Instagram to have “only” videos of paint drying is a straw man because it nearly eliminates all the design decisions that caused the harm. So, yeah, if you eliminate all that design that causes harm, the harm isn’t caused, but that’s not what anyone’s talking about.

First, however, let’s start with what Section 230 actually says:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

No provider or user of an interactive computer service shall be held liable on account of—

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

There’s more that I believe isn’t currently relevant, but by all means look and correct me.

In everyday language, what does 230 say? It’s a narrow carve-out for responsibility based only on “providers are not necessarily publishers” and “providers can choose what content appears, or does not”.

Now, what are these lawsuits claiming? They claim (I’m going to speak to just Instagram here, but this applies to all the others as well):

  • That Instagram, as a system, has been specifically designed to be addictive
  • That Instagram, as a system, has been specifically designed to worsen the mental health of its users
  • That Instagram, as a system, has been specifically designed to maximize user engagement at the expense of that user
  • That children deserve additional protection–just like children get additional protection from advertising–from hostile systems because their brains are still developing and they’re particularly vulnerable to it

None of those are content arguments, and saying, “But what if the content was paint drying?” is not relevant or helpful. People aren’t addicted to “a single Instagram video” or even “a single Instagram channel” (you can probably tell I’m not on Instagram; I’m sure they’re not called “channels”). People are addicted to the system of Instagram that feeds them content specifically tailored to maximize addiction and use, and feeds them content in a way that maximizes addiction and use. For some people that’s makeup videos, for some people that’s movie clips; the specific content is not the point. Hell, there’s probably one guy in Minnesota who’s hopelessly addicted to paint drying videos.

The problem, as with practically everything we’re dealing with in the world, is not single bad actors or individual responsibility. The problem is the system, and the system has, in fact as documented in court, been specifically designed to be addictive, to ruin people’s mental health, and to cause harm. The only way we’re going to be able to address this is by focusing on the system.

Finally, we’ve got to address this statement as well:

If every editorial decision about how to present third-party content is now a “design choice” subject to product liability, Section 230 protects effectively nothing. Every website makes decisions about how to display user content. Every search engine ranks results. Every email provider filters spam. Every forum has a sorting algorithm, even if it’s just “newest first.” All of those are “design choices” that could, theoretically, be blamed for some downstream harm.

Instagram’s targeted recommendation and addiction algorithm dark patterns are not the same thing as “newest first”. This is a slippery slope argument with no evidence that such a slope exists. If “newest first” was equally addictive and harmful, Meta would not have spent probably billions creating its various “engagement” systems. This is like saying a lawsuit against a restaurant that poisoned someone with puffer fish will lead to lawsuits against restaurants for selling salmon because they’re both fish.

Another example: we didn’t ban normal darts after we banned lawn darts, despite their similar design decisions, because of the key differences in their design decisions that resulted in clear and obvious differences in their harmful outcomes. No one’s going to get sued for “newest first” specifically because of how it’s different to the engagement algorithms.

The people and companies who make products have always been responsible for the designs of their products when those designs cause harm, from the lawn dart to the Pinto. And, we have long recognized that mental harms are harms: “Intentional infliction of emotional distress”, for instance, has been a recognized tort for decades. That we now have products that cause mental harm is new simply because we didn’t used to have the technology to create those products. But, “products have designs that cause harm” is not a new concept, and neither is “mental harms are tortable harms”.

Furthermore, “every editorial decision” is not now a “design choice”; just the design choices. Providers are–still!–not publishers or speakers of third-party content, and–still!–are not liable for moderation. Nothing in these lawsuits can be reasonably construed to impact decisions to publish–or not–specific content, which is all 230 protects. These lawsuits are, fully, not about the content, any more than California’s ban on Amazon’s dark patterns is a ban on having a web store. These lawsuits are fundamentally not about speech, because the problem is not the speech, but the system around the speech.

That some people might benefit from social media doesn’t negate the harm done to other people, nor make the company not liable for the harm it causes. No matter how many people found joy and friendship playing lawn darts with their friends, that doesn’t resurrect the kids who died, or replace the eyes that were lost. “Someone who was not harmed by lawn darts” would never be invited to a lawsuit about someone who was harmed by lawn darts; that just doesn’t make sense.

I’ve come down pretty hard, here, like I’m fully in favor of these lawsuits. While I definitely believe the nature of these social media sites is specifically designed to be harmful, and we do need a way to address that, ehhhhh, the plaintiffs in these cases made some pretty bad arguments. “Encryption is harmful”, well, guess what, lack of encryption is more harmful! We absolutely can’t be saying that companies are damned if they do, damned if they don’t, and we definitely don’t want to be restricting encryption. As rightly pointed out by the author, mental harms are complex, multifaceted, and it’s difficult to determine a reliable causality; I don’t know enough about the people in question to speak on the analysis that happened here, but it probably wasn’t sufficient. But, that doesn’t mean that such an analysis is impossible, and being on social media for 16 hours a day is certainly a compelling starting point.

So, more broadly speaking, what should we do about it? I don’t know! There’s a needle that needs to be threaded, and I’m not the one to thread it. The big algorithmic social media sites are really bad and I love every cut that someone gets against them, but there were certainly arguments being made on the plaintiff’s side (encryption? Come on!) that were pure BS and bad for everyone.

All that being said, one thing we absolutely must not do is misrepresent the actual harm and problems caused by the systems these companies created, and we need some kind of law or regulation to end it and make them liable for it. Hell, a basic goddamn privacy law would probably get us most of the way there on its own just by cutting down on the fodder that goes into their algorithms. Good luck to us all on that.

For editor’s choice on the insightful side, we start out with a comment from MrWilson about the Trump administration trying to rein in RFK Jr.:

Junior should check the schedule. There might be a bus coming and he might be under it soon.

Next, it’s frankcox with a comment about Brendan Carr lazily trying to ban all foreign routers:

Ban MS Windows instead?

If the objective is to increase Internet security with no regard to secondary/downstream ramifications, then wouldn’t it make more sense to ban Microsoft Windows?

MS Windows has been responsible for more security issues than any other single factor pretty much since the first day it showed up on the Internet.

Over on the funny side, our first place winner is MrWilson again, this time with a comment about learning HTML back in the early days of the web:

This comment is best viewed in Netscape Navigator 3.0.

In second place, it’s Thad with a comment about a bad take from the Washington Post editorial board:

Well jeez, if you can’t trust an unsigned editorial from a paper whose owner has actively and publicly interfered with its content to favor the Trump Administration, who can you trust?

For editor’s choice on the funny side, we start out with a comment from Pixelation about the deployment of “synergy” corporate speak to announce layoffs:

Pushing the envelope

Well, they can use those synergies and circle back to their core competencies, which will streamline the deliverables for a deep dive so they can move the needle. It will be a paradigm shift when everyone has skin in the game!

Finally, it’s Bloof with a comment about the court’s rejection of attempts to take down the DOGE deposition videos:

Once again biased judges fail to protect the most delicate treasure that america owns, the egos of unqualified white men promoted well beyond anything their mediocrity would justify.

That’s all for this week, folks!

