🤖 AI Summary
Traditional anonymization methods are vulnerable to large language model (LLM) agents, which can reconstruct real identities by combining sparse, weak cues with publicly available information. This work formalizes that threat as a new paradigm, "inference-driven linkage," and argues that identity inference itself constitutes a fundamental privacy risk. To systematically evaluate LLMs' de-anonymization capabilities under varying intent and knowledge conditions, the authors develop InferLink, a controllable benchmark framework. Experiments on the Netflix Prize dataset show that LLM agents achieve a 79.2% identity reconstruction rate, substantially outperforming the 56.0% attained by classical baselines. Notably, identity linkage occurs even in non-adversarial tasks, underscoring the pervasive nature of this privacy risk.
📝 Abstract
Anonymization is widely treated as a practical safeguard because re-identifying anonymous records was historically costly, requiring domain expertise, tailored algorithms, and manual corroboration. We study a growing privacy risk that may weaken this barrier: LLM-based agents can autonomously reconstruct real-world identities from scattered, individually non-identifying cues. By combining these sparse cues with public information, agents resolve identities without bespoke engineering. We formalize this threat as *inference-driven linkage* and systematically evaluate it across three settings: classical linkage scenarios (Netflix and AOL), *InferLink* (a controlled benchmark varying task intent, shared cues, and attacker knowledge), and modern text-rich artifacts. Without task-specific heuristics, agents successfully execute both fixed-pool matching and open-ended identity resolution. In the Netflix Prize setting, an agent reconstructs 79.2% of identities, significantly outperforming a 56.0% classical baseline. Furthermore, linkage emerges not only under explicit adversarial prompts but also as a byproduct of benign cross-source analysis in *InferLink* and unstructured research narratives. These findings establish that identity inference, not merely explicit information disclosure, must be treated as a first-class privacy risk; evaluations must measure what identities an agent can infer.
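To make the "classical baseline" concrete: attacks of this kind on the Netflix Prize data typically score each candidate public profile by its overlap with the anonymized record, weighting rare items more heavily. The following is a minimal illustrative sketch under assumed toy data; all names, weights, and the date tolerance are hypothetical and not taken from the paper:

```python
# Hypothetical sketch of a classical record-linkage baseline (not the
# paper's implementation): match an anonymized (item, day) trace against
# candidate public profiles, weighting rare items as stronger cues.

def linkage_score(anon_record, candidate, popularity, date_tol=3):
    """Sum inverse-popularity weights over (item, day) near-matches."""
    score = 0.0
    for item, day in anon_record:
        for c_item, c_day in candidate:
            if item == c_item and abs(day - c_day) <= date_tol:
                score += 1.0 / popularity[item]  # rare item => strong cue
                break
    return score

def best_match(anon_record, candidates, popularity):
    """Return the candidate name with the highest linkage score."""
    return max(candidates,
               key=lambda name: linkage_score(anon_record,
                                              candidates[name],
                                              popularity))

# Toy example: two public profiles, one anonymized trace.
popularity = {"rare_film": 2, "blockbuster": 900}
candidates = {
    "alice": [("rare_film", 100), ("blockbuster", 50)],
    "bob":   [("blockbuster", 51)],
}
anon = [("rare_film", 101), ("blockbuster", 49)]
```

The paper's point is that an LLM agent needs no such hand-built scoring function: it performs the same cross-source matching, and open-ended resolution beyond a fixed candidate pool, from cues alone.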