Exposing Privacy Risks in Anonymizing Clinical Data: Combinatorial Refinement Attacks on k-Anonymity Without Auxiliary Information

📅 2025-09-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work challenges the widely held belief that k-anonymity provides inherent privacy protection in the absence of auxiliary information. Focusing on the prevalent case of locally recoded k-anonymous datasets, we propose the Combinatorial Refinement Attack (CRA)—a novel privacy attack that requires no external auxiliary information and makes no assumptions about the underlying data distribution. CRA formulates a linear programming model that exploits the utility-optimization property of local recoding to systematically narrow the feasible value space of sensitive attributes, enabling high-accuracy re-identification. Experiments on real clinical microdata demonstrate that, even in the complete absence of background knowledge, existing k-anonymous releases fall significantly short of their promised privacy guarantees. To our knowledge, this is the first work to expose a fundamental, auxiliary-information-free vulnerability in k-anonymity, thereby undermining its foundational claim as a robust privacy standard.

📝 Abstract
Despite longstanding criticism from the privacy community, k-anonymity remains a widely used standard for data anonymization, mainly due to its simplicity, regulatory alignment, and preservation of data utility. However, non-experts often defend k-anonymity on the grounds that, in the absence of auxiliary information, no known attacks can compromise its protections. In this work, we refute this claim by introducing Combinatorial Refinement Attacks (CRA), a new class of privacy attacks targeting k-anonymized datasets produced using local recoding. This is the first method that does not rely on external auxiliary information or assumptions about the underlying data distribution. CRA leverages the utility-optimizing behavior of the local recoding anonymization of ARX, a widely used open-source tool for anonymizing data in clinical settings, to formulate a linear program that significantly reduces the space of plausible sensitive values. To validate our findings, we partnered with a network of free community health clinics, an environment where (1) auxiliary information is indeed hard to find due to the population they serve and (2) open-source k-anonymity solutions are attractive due to regulatory obligations and limited resources. Our results on real-world clinical microdata reveal that even in the absence of external information, established anonymization frameworks do not deliver the promised level of privacy, raising critical privacy concerns.
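The abstract's core idea — that the utility-optimizing behavior of the recoder itself leaks information, without any auxiliary data — can be illustrated with a small brute-force sketch. Everything below is a hypothetical toy: `toy_recoding` is a deliberately simplified stand-in recoder (sort and cut into groups of size k, publish each group's interval), not ARX's algorithm, and `combinatorial_refinement` uses exhaustive search in place of the paper's linear program. The point it demonstrates is the same: an attacker who knows the recoder is optimal can discard candidate datasets whose optimal recoding would differ from the release.

```python
from itertools import combinations_with_replacement, product

def toy_recoding(ages, k):
    # Toy stand-in for a utility-optimal recoder (NOT ARX's algorithm):
    # sort the ages, cut them into consecutive groups of size k, and
    # publish each group's [min, max] interval. Assumes len(ages) % k == 0.
    s = sorted(ages)
    return [(s[i], s[i + k - 1]) for i in range(0, len(s), k)]

def combinatorial_refinement(published, domain, k):
    # Brute-force analogue of the refinement idea: enumerate candidate
    # underlying datasets, keep only those whose recoding reproduces the
    # published release exactly, and record which values remain possible
    # at each sorted position within each group.
    per_group = [
        list(combinations_with_replacement(
            [v for v in domain if lo <= v <= hi], k))
        for (lo, hi) in published
    ]
    surviving = [[set() for _ in range(k)] for _ in published]
    for combo in product(*per_group):
        ages = [v for group in combo for v in group]
        if toy_recoding(ages, k) == list(published):
            for i, group in enumerate(combo):
                for j, v in enumerate(group):
                    surviving[i][j].add(v)
    return surviving

# A 3-anonymous release of six ages as two [min, max] groups:
release = [(20, 30), (30, 40)]
narrowed = combinatorial_refinement(release, range(20, 41), 3)
print(narrowed[0][0], narrowed[0][2])  # {20} {30}
```

Even in this toy setting, consistency with the recoder's behavior pins the smallest and largest age of every group down to a single value — two of the three records in each group are recovered exactly, despite the nominal 3-anonymity of the release.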
Problem

Research questions and friction points this paper is trying to address.

Attacking k-anonymity without external auxiliary information
Revealing privacy risks in clinical data anonymization
Formulating combinatorial refinement attacks on local recoding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combinatorial Refinement Attacks without external information
Linear program reduces plausible sensitive values space
Targets local recoding in ARX open-source software
S
Somiya Chhillar
George Mason University, Fairfax, VA, USA
M
Mary K. Righi
MAP Clinics, Fairfax, VA, USA
R
Rebecca E. Sutter
George Mason University, Fairfax, VA, USA
E
Evgenios M. Kornaropoulos
George Mason University
Computer Security · ML Security · Privacy · Algorithms