Optimal Fairness under Local Differential Privacy

📅 2025-11-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper investigates the joint optimization of data fairness and downstream classification fairness under Local Differential Privacy (LDP). We derive a closed-form optimal LDP mechanism for binary sensitive attributes and obtain the optimal mechanism for multi-valued attributes via a tractable optimization framework, establishing for the first time a theoretical connection between reducing data unfairness during privacy pre-processing and mitigating classification unfairness. The framework jointly minimizes multiple fairness metrics (e.g., statistical parity, equal opportunity) alongside classification error. Experiments on real-world datasets demonstrate that our approach significantly outperforms existing LDP methods: under strict LDP guarantees, it improves classification fairness by 23.6% on average while incurring less than 1.2% accuracy loss relative to the non-private baseline. Moreover, it dominates state-of-the-art pre-processing and post-processing methods across the full accuracy-fairness trade-off curve.
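For reference, the standard forms of the privacy and fairness notions named above (the paper's exact notation and metric definitions are not reproduced here):

```latex
% Epsilon-LDP: a randomized mechanism Q over the sensitive attribute satisfies
% \varepsilon-LDP if, for all inputs s, s' and every output t,
\[
  Q(t \mid s) \;\le\; e^{\varepsilon}\, Q(t \mid s').
\]
% Statistical parity gap of a classifier \hat{Y} w.r.t. a binary attribute S:
\[
  \mathrm{SP} \;=\; \bigl|\Pr[\hat{Y}=1 \mid S=1] - \Pr[\hat{Y}=1 \mid S=0]\bigr|,
\]
% Equal-opportunity gap restricts the comparison to the positive class Y = 1:
\[
  \mathrm{EO} \;=\; \bigl|\Pr[\hat{Y}=1 \mid S=1, Y=1] - \Pr[\hat{Y}=1 \mid S=0, Y=1]\bigr|.
\]
```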

📝 Abstract
We investigate how to optimally design local differential privacy (LDP) mechanisms that reduce data unfairness and thereby improve fairness in downstream classification. We first derive a closed-form optimal mechanism for binary sensitive attributes and then develop a tractable optimization framework that yields the corresponding optimal mechanism for multi-valued attributes. As a theoretical contribution, we establish that for discrimination-accuracy optimal classifiers, reducing data unfairness necessarily leads to lower classification unfairness, thus providing a direct link between privacy-aware pre-processing and classification fairness. Empirically, we demonstrate that our approach consistently outperforms existing LDP mechanisms in reducing data unfairness across diverse datasets and fairness metrics, while maintaining accuracy close to that of non-private models. Moreover, compared with leading pre-processing and post-processing fairness methods, our mechanism achieves a more favorable accuracy-fairness trade-off while simultaneously preserving the privacy of sensitive attributes. Taken together, these results highlight LDP as a principled and effective pre-processing fairness intervention technique.
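As context for the binary case, the sketch below shows standard randomized response, the canonical ε-LDP mechanism for a binary attribute. The symmetric flip probability used here is the textbook choice, not the paper's fairness-optimized closed form, and the function name and interface are illustrative assumptions.

```python
# Minimal sketch: standard randomized response for a binary sensitive attribute.
# Shown only as context; the paper's optimal mechanism selects its (possibly
# asymmetric) flip probabilities to also reduce data unfairness, and that
# closed form is not reproduced here.
import numpy as np

def randomized_response(s: int, eps: float, rng: np.random.Generator) -> int:
    """Report the true bit s with probability e^eps / (e^eps + 1); otherwise flip it."""
    p_truth = np.exp(eps) / (np.exp(eps) + 1.0)  # this choice satisfies eps-LDP
    return s if rng.random() < p_truth else 1 - s

# Example: privatize a column of binary sensitive attributes before model training.
rng = np.random.default_rng(0)
S = rng.integers(0, 2, size=10)
S_priv = np.array([randomized_response(int(s), eps=1.0, rng=rng) for s in S])
```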
Problem

Research questions and friction points this paper is trying to address.

Designing optimal LDP mechanisms to reduce data unfairness in classification
Establishing a theoretical link between privacy-aware pre-processing and classification fairness
Achieving a better accuracy-fairness trade-off while preserving the privacy of sensitive attributes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimal LDP mechanism design that reduces data unfairness
Closed-form solution for binary sensitive attributes
Tractable optimization framework for multi-valued attributes (see the sketch below)
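To illustrate what a tractable optimization over a multi-valued mechanism can look like, the hypothetical sketch below optimizes the k x k transition matrix of an LDP mechanism subject to ε-LDP constraints, trading a convex proxy for data unfairness against a utility term. The proxy objective, the interface (P_sy, eps, lam), and the use of cvxpy are assumptions made for illustration; the paper's actual formulation and fairness metrics are not reproduced here.

```python
# Hypothetical sketch (not the paper's formulation): choose a k x k LDP mechanism
# Q, where Q[s, t] = P(report t | true attribute s), via convex optimization.
import numpy as np
import cvxpy as cp

def fair_ldp_mechanism(P_sy: np.ndarray, eps: float, lam: float = 1.0) -> np.ndarray:
    """P_sy[s, y]: empirical joint pmf of a k-valued sensitive attribute S and a binary label Y."""
    k = P_sy.shape[0]
    Q = cp.Variable((k, k), nonneg=True)

    # Joint pmf of (reported attribute T, label Y): P(t, y) = sum_s P(s, y) * Q[s, t].
    P_ty = Q.T @ P_sy                        # shape (k, 2)
    p_t = cp.sum(P_ty, axis=1)               # marginal pmf of the reported attribute

    # Convex proxy for data unfairness (an assumption): dependence between the
    # reported attribute and the label, sum_t |P(T=t, Y=1) - P(Y=1) * P(T=t)|.
    base_rate = float(P_sy[:, 1].sum())
    unfairness = cp.sum(cp.abs(P_ty[:, 1] - base_rate * p_t))

    # Utility term (probability the report equals the true attribute) so the
    # solution is not the trivial constant mechanism.
    p_s = P_sy.sum(axis=1)
    utility = cp.sum(cp.multiply(p_s, cp.diag(Q)))

    constraints = [cp.sum(Q, axis=1) == 1]   # each row is a conditional distribution
    for s in range(k):                        # eps-LDP: Q[s, t] <= e^eps * Q[s', t]
        for s2 in range(k):
            if s != s2:
                constraints.append(Q[s, :] <= np.exp(eps) * Q[s2, :])

    cp.Problem(cp.Minimize(unfairness - lam * utility), constraints).solve()
    return Q.value
```

Here lam controls a trade-off between reducing dependence on the label and keeping the report faithful; the paper formalizes this trade-off differently, jointly with classification error.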
Hrad Ghoukasian
Department of Computing and Software, McMaster University
Shahab Asoodeh
McMaster University
Information Theory and Statistics · Differential Privacy · Machine Learning