🤖 AI Summary
This paper identifies and formalizes, for the first time, "hash-induced unfairness" in Local Differential Privacy (LDP): even under identical protocols and privacy budgets, different hash functions induce substantial disparities in per-user security strength, leaving some users more vulnerable to inference and poisoning attacks than others. To address this, the authors propose an entropy-constrained fair hash selection mechanism and design Fair-OLH, an LDP protocol that jointly optimizes user-side perturbation and encoding. Experiments demonstrate that Fair-OLH significantly reduces inter-user variance in attack success rates (an average reduction of 42.6%) with acceptable computational overhead, thereby enhancing both the fairness and the robustness of LDP systems. The work shifts the focus of LDP security evaluation and protocol design from aggregate privacy guarantees to equitable protection across users.
📝 Abstract
Local differential privacy (LDP) has become a widely accepted framework for privacy-preserving data collection. Many LDP protocols rely on hash functions to implement user-side encoding and perturbation, yet the security and privacy implications of hash function selection have not been previously investigated. In this paper, we show that hash functions can act as a source of unfairness in LDP protocols: although users operate under the same protocol and privacy budget, differences in their hash functions can lead to significant disparities in vulnerability to inference and poisoning attacks. To mitigate this hash-induced unfairness, we propose Fair-OLH (F-OLH), a variant of OLH that enforces an entropy-based fairness constraint on hash function selection. Experiments show that F-OLH effectively mitigates hash-induced unfairness with acceptable time overhead.
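To make the mechanism concrete, here is a minimal sketch of the idea the abstract describes: standard OLH hashes each user's value into one of g buckets and perturbs the bucket with g-ary randomized response, and an entropy-based fairness constraint rejects hash seeds whose induced bucket distribution over the domain is too far from uniform. The paper does not specify its hash family, entropy measure, or acceptance threshold, so the SHA-256-based hash, the Shannon-entropy criterion, and the 0.95·log₂(g) floor below are all illustrative assumptions, not the authors' implementation.

```python
import hashlib
import math
import random

def bucket(seed: int, v: int, g: int) -> int:
    """Hash value v into one of g buckets with a seeded hash (illustrative choice)."""
    h = hashlib.sha256(f"{seed}:{v}".encode()).digest()
    return int.from_bytes(h[:8], "big") % g

def bucket_entropy(seed: int, domain_size: int, g: int) -> float:
    """Shannon entropy (bits) of the bucket distribution this seed induces on the domain."""
    counts = [0] * g
    for v in range(domain_size):
        counts[bucket(seed, v, g)] += 1
    return -sum((c / domain_size) * math.log2(c / domain_size)
                for c in counts if c)

def pick_fair_seed(domain_size: int, g: int, min_entropy: float,
                   rng: random.Random) -> int:
    """Entropy-constrained selection: rejection-sample seeds until the
    induced bucket distribution meets the fairness (entropy) floor."""
    while True:
        seed = rng.getrandbits(32)
        if bucket_entropy(seed, domain_size, g) >= min_entropy:
            return seed

def olh_report(v: int, seed: int, g: int, eps: float, rng: random.Random) -> int:
    """OLH user-side report: hash, then apply g-ary randomized response."""
    x = bucket(seed, v, g)
    p_keep = math.exp(eps) / (math.exp(eps) + g - 1)
    if rng.random() < p_keep:
        return x
    y = rng.randrange(g - 1)          # uniformly random *other* bucket
    return y if y < x else y + 1

rng = random.Random(0)
eps = math.log(3.0)
g = round(math.exp(eps)) + 1          # OLH's variance-optimal choice g = e^eps + 1
domain_size = 64
seed = pick_fair_seed(domain_size, g, min_entropy=0.95 * math.log2(g), rng=rng)
report = olh_report(v=7, seed=seed, g=g, eps=eps, rng=rng)
```

The sketch only covers the user side; the aggregator's unbiased frequency estimation is unchanged from standard OLH. The fairness constraint adds a per-seed entropy check over the domain, which is the source of the time overhead the abstract refers to.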