
EU’s Top Court Just Made It Literally Impossible To Run A User-Generated Content Platform Legally

from the seems-like-a-problem dept

The Court of Justice of the EU—likely without realizing it—just completely shit the bed and made it effectively impossible to run any website that hosts user-generated content anywhere in the EU.

Obviously, for decades now, we’ve been talking about issues related to intermediary liability, and what standards are appropriate there. I am an unabashed supporter of the US’s approach with Section 230, as it was initially interpreted, which said that any liability should land on the party who contributed the actual violative behavior—in nearly all cases the speaker, not the host of the content.

The EU has always offered weaker protections for intermediaries, first with the E-Commerce Directive and more recently with the Digital Services Act (DSA). Both still generally try to put the liability on the speaker, but they include ways of shifting that liability to the platform.

No matter which of those approaches you think is preferable, I don’t think anyone could (or should) favor what the Court of Justice of the EU came down with earlier this week, which is basically “fuck all this shit, if there’s any content at all on your site that includes someone’s personal data, you may be liable.”

As with so many legal clusterfucks, this one stems from a case with bad facts, which then leads to bad law. You can read the summary as the CJEU puts it:

The applicant in the main proceedings claims that, on 1 August 2018, an unidentified third party published on that website an untrue and harmful advertisement presenting her as offering sexual services. That advertisement contained photographs of that applicant, which had been used without her consent, along with her telephone number. The advertisement was subsequently reproduced identically on other websites containing advertising content, where it was posted online with the indication of the original source. When contacted by the applicant in the main proceedings, Russmedia Digital removed the advertisement from its website less than one hour after receiving that request. The same advertisement nevertheless remains available on other websites which have reproduced it.

And, yes, no one is denying that this absolutely sucks for the victim in this case. But if there’s to be any legal recourse, it should be against whoever created and posted that fake ad. Instead, the CJEU finds that Russmedia can be held liable, even though it took the ad down less than an hour after being notified.

The lower courts went back and forth on this, with a Romanian tribunal (on first appeal) finding, properly, that there’s no fucking way Russmedia should be held liable, seeing as it was merely hosting the ad and had nothing to do with its creation:

The Tribunalul Specializat Cluj (Specialised Court, Cluj, Romania) upheld that appeal, holding that the action brought by the applicant in the main proceedings was unfounded, since the advertisement at issue in the main proceedings did not originate from Russmedia, which merely provided a hosting service for that advertisement, without being actively involved in its content. Accordingly, the exemption from liability provided for in Article 14(1)(b) of Law No 365/2002 would be applicable to it. As regards the processing of personal data, that court held that an information society services provider was not required to check the information which it transmits or actively to seek data relating to apparently unlawful activities or information. In that regard, it held that Russmedia could not be criticised for failing to take measures to prevent the online distribution of the defamatory advertisement at issue in the main proceedings, given that it had rapidly removed that advertisement at the request of the applicant in the main proceedings.

With the case sent up to the CJEU, things get totally twisted, as the court argues that, under the GDPR, the inclusion of “sensitive personal data” in the ad suddenly makes the host a “joint controller” of that data. And once the host is a controller, the much stricter GDPR rules on data protection apply, and the more careful calibration of intermediary liability rules gets tossed right out the window.

And out the window, right with it, is the ability to have a functioning open internet.

The court basically shreds basic intermediary liability principles here:

In any event, the operator of an online marketplace cannot avoid its liability, as controller of personal data, on the ground that it has not itself determined the content of the advertisement at issue published on that marketplace. Indeed, to exclude such an operator from the definition of ‘controller’ on that ground alone would be contrary not only to the clear wording, but also the objective, of Article 4(7) of the GDPR, which is to ensure effective and complete protection of data subjects by means of a broad definition of the concept of ‘controller’.

Under this ruling, it appears that any website that hosts any user-generated content can be strictly liable if any of that content contains “sensitive personal data” about any person. But how the fuck are they supposed to handle that?

The basic answer is to pre-scan any user-generated content for anything that might later be deemed to be sensitive personal data and make sure it doesn’t get posted.

How would a platform do that?

¯\_(ツ)_/¯

There is no way that this is even remotely possible for any platform, no matter how large or how small. And it’s even worse than that. As intermediary liability expert Daphne Keller explains:

The Court said the host has to

  • pre-check posts (i.e. do general monitoring)
  • know who the posting user is (i.e. no anonymous speech)
  • try to make sure the posts don’t get copied by third parties (um, like web search engines??)

Basically, all three of those are effectively impossible.

Think about what the court is actually demanding here. Pre-checking posts means full-scale automated surveillance of every piece of content before it goes live—not just scanning for known CSAM hashes or obvious spam, but making subjective legal determinations about what constitutes “sensitive personal data” under the GDPR. Requiring user identification kills anonymity entirely, which is its own massive speech issue. And somehow preventing third parties from copying content? That’s not even a technical problem—it’s a “how do you stop the internet from working like the internet” problem.
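To make that concrete, here is a rough, purely hypothetical sketch of what a “pre-check” filter could even look like in practice: regex matching for phone numbers plus a keyword list loosely inspired by the GDPR’s Article 9 special categories. Every name, pattern, and keyword below is my own illustration, not anything the court or the GDPR specifies, and that’s the point: pattern matching can flag a phone number, but it cannot tell you whose number it is, whether that person consented, or whether the post legally constitutes “sensitive personal data.”

```python
import re

# Hypothetical, naive pre-screening filter. This illustrates how crude any
# automated "sensitive data" check is; it is not a real compliance mechanism.

# Loose keyword hints at GDPR Article 9 "special categories" (health, sex life,
# religion, politics, union membership, etc.). Real posts rarely announce
# themselves this neatly, so this both over- and under-matches.
SENSITIVE_HINTS = [
    "diagnosis", "hiv", "pregnant", "religion", "sexual services",
    "escort", "political party", "trade union", "ethnicity",
]

# Very rough pattern for phone-number-looking strings.
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")


def pre_check(post_text: str) -> dict:
    """Flag a post before publication. Returns what was matched, not whether
    publishing it would actually be lawful; no regex can determine that."""
    lowered = post_text.lower()
    keyword_hits = [kw for kw in SENSITIVE_HINTS if kw in lowered]
    phone_hits = PHONE_RE.findall(post_text)
    return {
        # Over-blocking is the only "safe" default under the ruling.
        "block": bool(keyword_hits or phone_hits),
        "keyword_hits": keyword_hits,
        "phone_numbers": phone_hits,
    }


if __name__ == "__main__":
    ad = "Call any time: +40 712 345 678. Offering massage and companionship."
    print(pre_check(ad))
    # This flags the phone number, but it has no idea whose number it is,
    # whether the poster is that person, or whether they consented. That
    # judgment is the part the ruling demands, and it is not automatable.
```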

Some people have said that this ruling isn’t so bad, because it’s about advertisements and because it’s talking about “sensitive personal data.” But it’s difficult to see how either of those things limits the ruling at all.

There’s nothing in the law or the ruling that inherently limits its conclusions to “advertisements.” The same underlying reasoning would apply to any third-party content on any website subject to the GDPR.

As for the “sensitive personal data” part, that makes little difference, because sites would still have to scan all content before it gets posted, and accurately predict what a court might later deem to be sensitive personal data, in order to guarantee none of it slips through. That means it’s highly likely that any website trying to comply with this ruling will block a ton of content on the off chance that it might later be deemed sensitive.

As the court noted:

 In accordance with Article 5(1)(a) of the GDPR, personal data are to be processed lawfully, fairly and in a transparent manner in relation to the data subject. Article 5(1)(d) of the GDPR adds that personal data processed must be accurate and, where necessary, kept up to date. Thus, every reasonable step must be taken to ensure that personal data that are inaccurate, having regard to the purposes for which they are processed, are erased or rectified without delay. Article 5(1)(f) of that regulation provides that personal data must be processed in a manner that ensures appropriate security of those data, including protection against unauthorised or unlawful processing.

Good luck figuring out how to do that with third-party content.

And the court is pretty clear that every website must pre-scan every bit of content. It claims this is about “marketplaces” and “advertisements,” but there’s nothing in the GDPR that limits the ruling to those categories:

Accordingly, inasmuch as the operator of an online marketplace, such as the marketplace at issue in the main proceedings, knows or ought to know that, generally, advertisements containing sensitive data in terms of Article 9(1) of the GDPR, are liable to be published by user advertisers on its online marketplace, that operator, as controller in respect of that processing, is obliged, as soon as its service is designed, to implement appropriate technical and organisational measures in order to identify such advertisements before their publication and thus to be in a position to verify whether the sensitive data that they contain are published in compliance with the principles set out in Chapter II of that regulation. Indeed, as is apparent in particular from Article 25(1) of that regulation, the obligation to implement such measures is incumbent on it not only at the time of the processing, but already at the time of the determination of the means of processing and, therefore, even before sensitive data are published on its online marketplace in breach of those principles, that obligation being specifically intended to prevent such breaches.

No more anonymity allowed:

As regards, in the second place, the question whether the operator of an online marketplace, as controller of the sensitive data contained in advertisements published on its website, jointly with the user advertiser, must verify the identity of that user advertiser before the publication, it should be recalled that it follows from a combined reading of Article 9(1) and Article 9(2)(a) of the GDPR that the publication of such data is prohibited, unless the data subject has given his or her explicit consent to the data in question being published on that online marketplace or one of the other exceptions laid down in Article 9(2)(b) to (j) is satisfied, which does not, however, appear to be the case here.

On that basis, while the placing by a data subject of an advertisement containing his or her sensitive data on an online marketplace may constitute explicit consent, within the meaning of Article 9(2)(a) of the GDPR, such consent is lacking where that advertisement is placed by a third party, unless that party can demonstrate that the data subject has given his or her explicit consent to the publication of that advertisement on the online marketplace in question. Consequently, in order to be able to ensure, and to be able to demonstrate, that the requirements laid down in Article 9(2)(a) of the GDPR are complied with, the operator of the marketplace is required to verify, prior to the publication of such an advertisement, whether the user advertiser preparing to place the advertisement is the person whose sensitive data appear in that advertisement, which presupposes that the identity of that user advertiser is collected.

Finally, as Keller noted above, the CJEU also seems to think it’s possible to require platforms to make sure content never gets copied to, or displayed on, any other platform:

 Thus, where sensitive data are published online, the controller is required, under Article 32 of the GDPR, to take all technical and organisational measures to ensure a level of security apt to effectively prevent the occurrence of a loss of control over those data.

To that end, the data controller must consider in particular all technical measures available in the current state of technical knowledge that are apt to block the copying and reproduction of online content.

Again, the CJEU appears to be living in a fantasy land that doesn’t exist.
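For what it’s worth, the closest real-world “technical measures” a host has for discouraging copying are purely advisory. Here’s a hypothetical sketch of what a platform could actually deploy; the header names and robots.txt format are real, everything else is illustrative, and none of it stops anyone who simply ignores the request:

```python
# The nearest thing a host has to "blocking copying and reproduction" is a set
# of advisory signals, all of which depend on the copier voluntarily honoring them.

# Response headers a platform might attach to listing pages.
ANTI_COPY_RESPONSE_HEADERS = {
    # Asks search engines not to index the page or its images. Scrapers that
    # republish ads wholesale are under no obligation to look at this.
    "X-Robots-Tag": "noindex, noimageindex, nofollow",
    # Asks intermediaries and browsers not to store a copy of the response.
    "Cache-Control": "no-store",
}

# robots.txt is likewise a request, not an access control.
ROBOTS_TXT = """User-agent: *
Disallow: /classifieds/
"""

if __name__ == "__main__":
    for name, value in ANTI_COPY_RESPONSE_HEADERS.items():
        print(f"{name}: {value}")
    print()
    print(ROBOTS_TXT)
```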

This is what happens when you over-index on the idea of “data controllers” needing to keep data “private.” Whoever revealed sensitive data should have the liability placed on them. Putting it on the intermediary is misplaced and ridiculous.

There is simply no way to comply with the law under this ruling.

In such a world, the only options are to ignore it, shut down EU operations, or geoblock the EU entirely. I assume most platforms will simply ignore it—and hope that enforcement will be selective enough that they won’t face the full force of this ruling. But that’s a hell of a way to run the internet, where companies just cross their fingers and hope they don’t get picked for an enforcement action that could destroy them.

There’s a reason the basic simplicity of Section 230 makes sense. It says “the person who creates the content that violates the law is responsible for it.” As soon as you say that the companies merely providing the tools for those creators can also be liable, you open up a can of worms that will create a huge mess in the long run.

That long run has arrived in the EU, and with it, quite the mess.

Filed Under: cjeu, controller, data protection, dsa, featured, gdpr, intermediary liability, section 230, sensitive data, user generated content

Companies: russmedia
