🤖 AI Summary
This paper reveals a critical vulnerability of Local Differential Privacy (LDP) to poisoning attacks in ranking estimation: an adversary can deploy only a small number of sybil users to precisely manipulate item frequencies, distort ranking outcomes, and maximize personal gain. To formalize this threat, the authors propose a unified attack framework that defines attack cost and optimal target items. For three mainstream LDP protocols—kRR, OUE, and OLH—they design multi-round iterative attack algorithms grounded in frequency perturbation analysis, hash preimage modeling, and confidence-driven optimization. Crucially, they introduce a confidence-level metric to quantify attack success probability. Theoretical analysis and extensive experiments demonstrate that the attack achieves significant rank distortion at low overhead. This work provides the first systematic characterization of the security boundaries of LDP-based ranking mechanisms, establishing foundational insights for designing robust defenses.
📝 Abstract
Local differential privacy (LDP) involves users perturbing their inputs to provide plausible deniability of their data. However, this also makes LDP vulnerable to poisoning attacks. In this paper, we first introduce novel poisoning attacks for ranking estimation. These attacks are intricate, as attackers do not merely adjust the frequency of target items; instead, they leverage a limited number of fake users to precisely modify frequencies, effectively altering item rankings to maximize gains. To tackle this challenge, we introduce the concepts of attack cost and optimal attack item (set), and propose corresponding strategies for the kRR, OUE, and OLH protocols. For kRR, we iteratively select optimal attack items and allocate suitable fake users. For OUE, we iteratively determine optimal attack item sets and account for the incremental changes in item frequencies across different sets. For OLH, we develop a harmonic cost function based on hash pre-images to select hash values that support a larger number of effective attack items. Lastly, we present an attack strategy based on confidence levels to quantify the probability of a successful attack and the required number of attack iterations more precisely. We demonstrate the effectiveness of our attacks through theoretical and empirical evidence, highlighting the necessity of defenses against these attacks. The source code and data have been made available at https://github.com/LDP-user/LDP-Ranking.git.
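To make the attack setting concrete, the following is a minimal sketch (not the paper's algorithm) of the standard kRR protocol and the simplest form of poisoning it enables: sybil users each submit a report for a chosen target item, inflating its estimated frequency and thus its rank. The domain size `k`, privacy budget `epsilon`, and the naive "always report the target" strategy are illustrative assumptions; the paper's actual attacks select optimal items and allocate fake users iteratively.

```python
import math
import random

def krr_perturb(value, k, epsilon):
    """kRR: keep the true value with prob p, else report a uniform other value."""
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p:
        return value
    return random.choice([v for v in range(k) if v != value])

def estimate_frequencies(reports, k, epsilon):
    """Standard unbiased kRR frequency estimator: f_v = (c_v/n - q) / (p - q)."""
    n = len(reports)
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    q = 1.0 / (math.exp(epsilon) + k - 1)
    counts = [0] * k
    for r in reports:
        counts[r] += 1
    return [(c / n - q) / (p - q) for c in counts]

random.seed(0)
k, epsilon = 8, 1.0

# Genuine users with roughly uniform true values.
true_values = [random.randrange(k) for _ in range(20000)]
reports = [krr_perturb(v, k, epsilon) for v in true_values]

# Sybil users: each fake user simply reports the target item verbatim
# (a naive maximal-gain strategy; illustrative numbers only).
target, num_fake = 3, 500
poisoned = reports + [target] * num_fake

clean = estimate_frequencies(reports, k, epsilon)
attacked = estimate_frequencies(poisoned, k, epsilon)
print(f"item {target}: clean {clean[target]:.3f} -> attacked {attacked[target]:.3f}")
```

Even 500 fake users among 20,000 genuine ones push the target item's estimate well above its true share, because each fake report is divided by the small factor `p - q` during debiasing; this amplification is what makes low-cost rank manipulation feasible.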