
Privacy‑Preserving Age Verification Falls Apart On Contact With Reality

from the seems-bad dept

Here we go again. Whenever policymakers insist that there’s some “nerd harder” solution to tricky societal problems, actual experts have to spend a ridiculous amount of time explaining basic realities to them. Sometimes those are realities about the technology itself. And sometimes they’re realities about the world the technology has to be deployed in.

This time it’s age verification’s turn.

Steve Bellovin—one of the most respected security researchers out there, and instrumental in showing why “safe” crypto backdoors can’t exist—just published a short paper arguing that so‑called privacy‑protecting (“zero‑knowledge”) age verification can exist in theory, but not in practical reality.

Bellovin walks through the proposed architectures and then hits a variety of “insurmountable obstacles” that break privacy once you leave the whiteboard and touch reality. This isn’t all of them, but here are a few of the important points from his paper.

Identity‑proofing creates a privacy bottleneck. Somewhere, an identity provider must verify you. Even if it later mints an unlinkable token, that provider is the weak link—and in regulated systems it will not be allowed to “just delete” your information. As Bellovin puts it:

Regulation implies the ability for governments to audit the regulated entities’ behavior. That in turn implies that logs must be kept. It is likely that such logs would include user names, addresses, ages, and forms of credentials presented.
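
To make that bottleneck concrete, here’s a rough sketch in Python (hypothetical names, an HMAC standing in for a real public-key or zero-knowledge credential, and definitely not Bellovin’s design) of the idealized flow, and of the one line that a regulated, auditable deployment forces back in:

```python
# Illustrative only -- not Bellovin's design or any deployed scheme. A real
# system would use public-key or zero-knowledge credentials; an HMAC stands
# in here so the sketch stays dependency-free.
import hmac, hashlib, secrets
from datetime import date

class IdentityProvider:
    def __init__(self):
        self._key = secrets.token_bytes(32)   # never leaves the IDP
        self.audit_log = []                   # the regulatory weak link

    def issue_age_token(self, name: str, birth_year: int) -> bytes | None:
        """Identity-proofing: the IDP necessarily sees who you are."""
        if date.today().year - birth_year < 18:   # crude year-based check, fine for a sketch
            return None
        token = secrets.token_bytes(16)
        tag = hmac.new(self._key, token, hashlib.sha256).digest()
        # On the whiteboard, this record is deleted immediately. Under an
        # auditable regulatory regime it is kept -- and that log is the
        # linkage the whole design was supposed to eliminate.
        self.audit_log.append({"name": name, "birth_year": birth_year})
        return token + tag

    def verify(self, blob: bytes) -> bool:
        """Sites call back to the IDP to check tokens -- itself a tracking channel."""
        token, tag = blob[:16], blob[16:]
        return hmac.compare_digest(tag, hmac.new(self._key, token, hashlib.sha256).digest())
```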

Then there’s the issue of fraud and duplication of credentials. Accepting multiple credential types increases coverage, but it also increases abuse; people can and do hold multiple valid IDs:

The fact that multiple forms of ID are acceptable… exacerbates the fraud issue… This makes it impossible to prevent a single person from obtaining multiple primary credentials, including ones for use by underage individuals.

Cost and access will absolutely chill speech. Identity providers are expensive. If users pay, you’ve built a wealth test for lawful speech. If sites pay, the costs roll downhill (fees, ads, data‑for‑access) and coverage narrows to the cheapest providers, who may also be more susceptible to breaches:

Operating an IDP is likely to be expensive… If web sites shoulder the cost, they will have to recover it from their users. That would imply higher access charges, more ads (with their own privacy challenges), or both.

Sharing credentials drives mission creep, which creates its own dangers. If a token proves only “over 18,” people will share it (parents to kids, friends to friends). To deter that, providers tie tokens to identities/devices or bundle more attributes—making them more linkable and more revocable:

If the only use of the primary credential is obtaining age-verifying subcredentials, this isn’t much of a deterrent—many people simply won’t care… That, however, creates pressure for mission creep…, including opening bank accounts, employment verification, and vaccination certificates; however, this is also a major point of social control, since it is possible to revoke a primary credential and with it all derived subcredentials.

The end result, then, is that you’re not just attacking privacy yet again; you’re creating a tool for authoritarian pressure:

Those who are disfavored by authoritarian governments may lose access not just to pornography, but to social media and all of these other services.
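
To see why that revocation switch matters, here’s a tiny hypothetical sketch (made-up identifiers, not any deployed scheme): once subcredentials are bound back to a primary credential so sharing can be punished, revoking that one credential quietly takes everything derived from it along for the ride.

```python
# Hypothetical sketch of credential binding plus revocation; not a real scheme.
revoked_primaries: set[str] = set()

# To deter sharing, each subcredential is tied back to the primary credential
# that derived it -- which is exactly the linkability the "unlinkable token"
# pitch promised away.
subcredentials = {
    "age-over-18:device-abc":  "primary:chris",
    "bank-kyc:device-abc":     "primary:chris",
    "vaccine-cert:device-abc": "primary:chris",
}

def is_valid(subcred_id: str) -> bool:
    return subcredentials[subcred_id] not in revoked_primaries

# Revoking the single primary credential silently kills every derived one:
revoked_primaries.add("primary:chris")
print([s for s in subcredentials if is_valid(s)])   # [] -- age, banking, vaccine: all gone
```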

He also grounds it in lived reality, with a case study that shows who gets locked out first:

Consider a hypothetical person “Chris”, a non-driving senior citizen living with an adult child in a rural area of the U.S… Apart from the expense—quite possibly non-trivial for a poor family—Chris must persuade their child to then drive them 80 kilometers or more to a motor vehicles office…

There is also the social aspect. Imagine the embarrassment to all of an older parent having to explain to their child that they wish to view pornography.

None of this is an attack on the math. It’s a reminder that deployment reality ruins the cryptographic ideal. There’s more in the paper, but you get the idea.

The history here is important. Three years ago, France’s CNIL reviewed age‑gating tech and found it all terrible for privacy, then floated a zero‑knowledge demo. EU officials promptly said “yeah do that” as part of a broader internet ID push, which digital rights folks correctly flagged as a privacy/regulatory mess.

Stateside, the Foundation for American Innovation published a paper this February with the cute title “On the Internet, No One Knows You’re a Dog,” which now appears to have vanished from their website (?!?) but not before NY State Senator Andrew Gounardes—who’s never met a bad internet bill he didn’t support—cited it to push a statewide age‑verification law. (You can still find the paper via the Internet Archive, though it’s pretty much vanished from Google search…)

I should note how this also seems like yet another example of “protect the children!” moral panics crossing traditional partisan lines. Here’s an idea being pushed by aggressive technocrats in the EU… and then picked up excitedly by FAI, a right-leaning organization with close ties to the Trump White House (even as it keeps criticizing the EU approach to regulating the internet), and then used by a liberal Democrat in NY to justify a bad law.

This cross-partisan embrace of “privacy-preserving” age verification should terrify anyone who values civil liberties. When aggressive EU technocrats, Trump-aligned think tanks, and supposedly progressive Democrats all rally behind the same surveillance infrastructure—each convinced they’re the good guys—you’re witnessing the construction of an authoritarian tool that will outlast any particular administration’s priorities.

Meanwhile, because the conservatives on the Supreme Court decided they could toss decades of First Amendment precedent around age verification simply because they’re offended by naked people online, the stakes here aren’t hypothetical.

Privacy advocates are in the same place Bellovin is. EFF’s recent summary is blunt about what zero‑knowledge proofs can’t do in this context:

What ZKPs don’t do is mitigate verifier abuse or limit their requests, such as over-asking for information they don’t need or limiting the number of times they request your age over time. They don’t prevent websites or applications from collecting other kinds of observable personally identifiable information like your IP address or other device information while interacting with them.

ZKPs are a great tool for sharing less data about ourselves over time or in a one time transaction. But this doesn’t do a lot about the data broker industry that already has massive, existing profiles of data on people… Going from presenting your physical ID maybe 2-3 times a week to potentially proving your age to multiple websites and apps every day online is going to render going online itself as a burden at minimum and a barrier entirely at most for those who can’t obtain an ID.
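
In code terms, the gap EFF is describing looks roughly like this (hypothetical names, nothing specific to any real verifier): even a mathematically perfect proof says nothing about what the server logs around it.

```python
# Hypothetical verifier-side handler; the cryptography constrains the proof,
# not the logging around it.
from datetime import datetime, timezone

verifier_log = []

def verify_zero_knowledge_proof(proof: bytes) -> bool:
    # Stand-in for a real ZKP verifier; assume it reveals only "over 18".
    return bool(proof)

def handle_age_gated_request(client_ip: str, user_agent: str, proof: bytes) -> bool:
    ok = verify_zero_knowledge_proof(proof)
    # Nothing in the zero-knowledge math stops this line:
    verifier_log.append({
        "ip": client_ip,
        "user_agent": user_agent,
        "when": datetime.now(timezone.utc).isoformat(),
        "over_18": ok,
    })
    return ok
```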

There are absolutely contexts where ZK proofs can reduce disclosure—closed ecosystems, narrow deployments, no legal logging/audit mandates, low adversarial pressure, and little incentive to share credentials. That is not what these laws create. They create audit trails, liability, and incentives that recreate linkability.

A few months back we had professor Eric Goldman on the podcast to talk about his excellent paper on age verification/assurance. His bottom line matched Bellovin’s deployment‑reality critique: the tech creates serious harms regardless of branding. “Zero‑knowledge” doesn’t change the incentives, the governance, or the fact that someone, somewhere, has to check your ID and keep enough records to satisfy auditors and courts.

Lawmakers who want to control the internet will keep waving around “privacy‑preserving” as cover (Hi Senator Gounardes!). Bellovin just explained, with receipts, why that cover doesn’t actually protect privacy. It adds identity friction to lawful speech, supercharges data linkage, and hands governments and intermediaries a revocation switch. That’s not child protection; it’s infrastructure for control.

Filed Under: age verification, credentials, nerd harder, privacy, steve bellovin, tokens, zero knowledge proofs
