Researchers at ETH Zurich and Anthropic published a paper showing that LLM agents can deanonymize pseudonymous social media accounts at scale. The system matched anonymous posts on Hacker News to real LinkedIn profiles, correctly identifying 226 of 338 targets (68% recall at 90% precision). The agent works by browsing the web the way a human investigator would, piecing together clues from free-form text and cross-referencing them across platforms. The same approach works on Reddit and on anonymized interview transcripts: in one test, the LLM correctly re-identified 9 of 125 scientists from anonymized interviews about how they use AI, working from questionnaire answers alone. Co-author Daniel Paleka said he's "very worried" and called it "a large-scale invasion of privacy."

1. The Privacy Deal Is Broken (Daniel Paleka/ETH Zurich, EFF, GovInfoSecurity)

The internet assumed pseudonymity was enough. It's not anymore.

Pseudonymity is worth defending, and it shouldn't require expert-level tradecraft. The EFF's Jacob Hoffman-Andrews was direct: "I think there's a lot of value to being pseudo anonymous on the internet, and there are a lot of people who want to maintain [that] for a wide variety of reasons and they shouldn't all need to be experts in how to avoid a really dedicated adversary — as effectively an LLM is."

Even small amounts of identifying information are now dangerous. Hoffman-Andrews said the study "does definitely indicate the degree to which posting even a small amount of identifying information — in contexts where you might not imagine anyone is trying to unmask you — might result in somebody linking that identity."

The implicit threat model is gone. GovInfoSecurity reported that the average user "has long operated under an implicit threat model where they have assumed pseudonymity provides adequate protection because targeted deanonymization would require extensive effort." That assumption just broke. LLMs can now do what took human investigators hours, at scale, automatically.

Precision isn't certainty. 90% precision means 1 in 10 matches is wrong. Run this on thousands of people and you get hundreds of false identifications — real people wrongly linked to accounts that aren't theirs. That's enough to damage reputations, trigger investigations, or ruin careers based on someone else's posts.
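The arithmetic behind that claim is simple enough to sketch. Only the 226-correct and 90%-precision figures come from the paper; the 10,000-match scan below is a hypothetical scale-up for illustration:

```python
def expected_false_matches(num_matches: int, precision: float) -> int:
    """Expected number of wrong links among the matches a system returns:
    (1 - precision) * num_matches."""
    return round((1.0 - precision) * num_matches)

# Hypothetical scale-up: a scan that returns 10,000 claimed matches
# at the paper's 90% precision operating point.
print(expected_false_matches(10_000, 0.90))  # -> 1000 people wrongly linked

# The paper's own numbers: 226 correct IDs at 90% precision implies
# roughly 226 / 0.90 ~= 251 claimed matches, i.e. about 25 wrong ones.
claimed = round(226 / 0.90)
print(claimed, claimed - 226)  # -> 251 25
```

The point is that false positives grow linearly with deployment scale: a precision figure that sounds reassuring in a lab benchmark still produces hundreds of wrongly accused people when the tool is run against a large population.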

2. Careful People Are Still Safe (Bruce Schneier, security-conscious community)

If you were already hiding well, nothing changed.

Bruce Schneier framed the legal limit. An LLM "can't prove beyond a reasonable doubt" that two items came from the same person, so it's not sufficient for criminal conviction. But it might be enough someday to convince a judge to issue a search warrant — or persuade a grand jury to indict.

The study mostly caught people who weren't trying hard. Schneier's blog commenters noted that those who work deliberately to protect their anonymity are much harder to find. The Hacker News-to-LinkedIn pipeline works because many users reference the same projects on both platforms.

3. Somebody Will Use This as a Weapon (GovInfoSecurity, the researchers themselves)

Governments, employers, and stalkers just got an intelligence-grade capability.

The researchers published knowing the risk. Paleka said "this is one of those cases where your freedom stops where the other person's freedom [begins]." One of the co-authors, Nicholas Carlini, works at Anthropic — the company that builds Claude. They published because they believe the conversation needs to happen before the capability spreads.

The use cases are obvious and alarming. GovInfoSecurity reported that governments, corporations, and attackers could exploit the capability for surveillance, hyper-targeted advertising, and personalized social engineering. In authoritarian nations, it could present "greater challenges to dissidents, human rights activists, journalists and others who rely on anonymity or pseudo-anonymity to operate safely."

Defenses exist but aren't deployed. The researchers proposed that platforms enforce rate limits on API access, detect automated scraping, and block bulk data exports. LLM providers could monitor for deanonymization misuse and build guardrails. None of this is happening yet.
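As a concrete illustration of the first proposed defense, per-client API rate limits are commonly implemented as a token bucket. This is a generic sketch of that standard technique, not code from the paper or from any platform:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: each client gets a burst budget
    (capacity) that refills at a steady rate; requests beyond the budget
    are refused, which throttles automated bulk scraping."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

# A scraper firing 10 back-to-back requests exhausts the burst budget:
bucket = TokenBucket(rate=1.0, capacity=5.0)
results = [bucket.allow() for _ in range(10)]
print(results.count(True))  # -> 5 (only the burst budget gets through)
```

Rate limiting alone doesn't stop a patient adversary, but it raises the cost of the bulk collection that makes this attack work at scale, and it pairs naturally with the scraping detection and export blocking the researchers also propose.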

Where This Lands

The tool exists. The paper is public. The 90% precision number is going to show up in every privacy debate for years. Whether this changes anything depends on who picks it up first — a government surveillance program, a corporate HR department, or a stalker with a laptop. The researchers are pushing for defenses. The EFF is sounding alarms. And everyone with a burner account just learned that writing style is a fingerprint they can't wash off.

Sources