🤖 AI Summary
This work challenges the widely held belief that k-anonymity provides inherent privacy protection in the absence of auxiliary information. Focusing on the prevalent setting of locally recoded k-anonymous datasets, we propose the Combinatorial Refinement Attack (CRA), a novel privacy attack that requires no external auxiliary information and makes no assumptions about the underlying data distribution. CRA formulates a linear program that exploits the utility-optimization property of local recoding to systematically narrow the feasible value space of sensitive attributes, enabling high-accuracy re-identification. Experiments on real clinical microdata demonstrate that, even in the complete absence of background knowledge, existing k-anonymous releases fall significantly short of their promised privacy guarantees. To our knowledge, this is the first work to expose a fundamental, auxiliary-information-free vulnerability in k-anonymity, undermining its claim to be a robust privacy standard.
📝 Abstract
Despite longstanding criticism from the privacy community, k-anonymity remains a widely used standard for data anonymization, mainly due to its simplicity, regulatory alignment, and preservation of data utility. However, non-experts often defend k-anonymity on the grounds that, in the absence of auxiliary information, no known attack can compromise its protections. In this work, we refute this claim by introducing Combinatorial Refinement Attacks (CRA), a new class of privacy attacks on k-anonymized datasets produced via local recoding. This is the first method that relies on neither external auxiliary information nor assumptions about the underlying data distribution. CRA leverages the utility-optimizing behavior of local recoding in ARX, a widely used open-source tool for anonymizing data in clinical settings, to formulate a linear program that sharply reduces the space of plausible sensitive values. To validate our findings, we partnered with a network of free community health clinics, an environment where (1) auxiliary information is genuinely hard to obtain given the population they serve and (2) open-source k-anonymity solutions are attractive due to regulatory obligations and limited resources. Our results on real-world clinical microdata reveal that, even in the absence of external information, established anonymization frameworks do not deliver the promised level of privacy, raising critical privacy concerns.
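To make the core intuition concrete, here is a toy sketch (not the paper's actual linear program, and not ARX's API) of how a utility-optimal recoder leaks information. Assume, hypothetically, that a local recoder publishes the *tightest* interval covering the true values in an equivalence class; then both interval endpoints must be attained by some record, and an attacker can discard every value assignment that violates this optimality constraint:

```python
from itertools import product

# Hypothetical example: an equivalence class of k = 3 records was
# generalized to the age interval [30, 34]. If the recoder is
# utility-optimal (publishes the tightest covering interval), the
# endpoints 30 and 34 must each be attained by at least one record.
interval = range(30, 35)  # published generalization [30, 34]
k = 3                     # equivalence-class size

# Naive feasible space: any assignment of ages inside the interval.
all_assignments = list(product(interval, repeat=k))

# Combinatorial refinement: keep only assignments consistent with
# the tightness (utility-optimality) constraint.
feasible = [a for a in all_assignments if min(a) == 30 and max(a) == 34]

print(len(all_assignments), len(feasible))  # feasible space shrinks 125 -> 24
```

The constraint alone eliminates roughly 80% of the candidate assignments in this toy class, without any auxiliary information; the paper's CRA formalizes this kind of pruning as a linear program over the whole release.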