🤖 AI Summary
This paper addresses the theoretical quantification of classifier utility under local differential privacy (LDP). LDP perturbation degrades model performance, and the degradation is hard to quantify for black-box classifiers whose behavior is analytically intractable. The authors propose a new analytical paradigm that formally links the distributional concentration of LDP mechanisms to the local robustness of classifiers, yielding a general utility-analysis framework applicable to arbitrary LDP mechanisms and black-box classifiers. Two key technical ingredients support it: (i) a refined characterization of how LDP outputs concentrate around the original data, and (ii) black-box modeling of the classifier grounded in robustness theory. Together, they enable predictive, theoretically grounded utility bounds. Experiments show that the derived bounds are accurate in low-dimensional settings and that piecewise-based LDP mechanisms achieve better utility than alternatives on common classification tasks.
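In notation of our own choosing (the paper's formal definitions may differ), the reduction behind the framework can be sketched as: if the mechanism's output lands near the original input with high probability, and the classifier does not change its prediction anywhere in that neighborhood, then the prediction survives perturbation with at least that probability.

```latex
% Schematic concentration-to-robustness reduction (our notation, not the paper's).
% M: LDP mechanism, f: classifier, r: robustness radius, 1 - \delta: concentration mass.
\begin{gather*}
\Pr\bigl[\|M(x) - x\| \le r\bigr] \ge 1 - \delta
  \quad\text{(concentration of the LDP mechanism $M$)} \\
f(x') = f(x)\ \text{for all } x' \text{ with } \|x' - x\| \le r
  \quad\text{(local robustness of the classifier $f$ at $x$)} \\
\Longrightarrow\quad \Pr\bigl[f(M(x)) = f(x)\bigr] \ge 1 - \delta
\end{gather*}
```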
📝 Abstract
Local differential privacy (LDP) provides a rigorous and quantifiable privacy guarantee for personal data by introducing perturbation at the data source. However, quantifying the impact of this perturbation on classifier utility remains a theoretical challenge, particularly for complex or black-box classifiers.
This paper presents a framework for theoretically quantifying classifier utility under LDP mechanisms. The key insight is that LDP perturbation concentrates around the original data with a quantifiable probability, which turns the classifier's utility analysis into a robustness analysis over this concentrated region. Our framework connects the concentration analysis of LDP mechanisms with the robustness analysis of classifiers. Because it treats LDP mechanisms as general distributional functions and classifiers as black-box functions, it applies to any LDP mechanism and classifier. A direct application of our utility quantification is guiding the selection of LDP mechanisms and privacy parameters for a given classifier. Notably, our analysis shows that a piecewise-based mechanism yields better utility than alternatives in common scenarios.
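The abstract does not name a specific piecewise mechanism; one widely used instance is the Piecewise Mechanism of Wang et al. (ICDE 2019), and the sketch below (our illustration, not the paper's code) samples from it to make the concentration property concrete: a fixed fraction $e^{\varepsilon/2}/(e^{\varepsilon/2}+1)$ of the output mass falls in a narrow band whose width shrinks as $\varepsilon$ grows.

```python
import numpy as np

def piecewise_mechanism(t: float, eps: float, rng: np.random.Generator) -> float:
    """Sample one eps-LDP report for t in [-1, 1] using the Piecewise
    Mechanism (Wang et al., ICDE 2019) -- one possible 'piecewise-based'
    mechanism; the paper may analyze a different variant."""
    assert -1.0 <= t <= 1.0
    e_half = np.exp(eps / 2.0)
    C = (e_half + 1.0) / (e_half - 1.0)          # outputs lie in [-C, C]
    l = (C + 1.0) / 2.0 * t - (C - 1.0) / 2.0    # left edge of the high-density band
    r = l + C - 1.0                              # band width is C - 1 = 2/(e_half - 1)
    if rng.random() < e_half / (e_half + 1.0):
        # Concentrated branch: the report falls in the narrow band near t.
        return rng.uniform(l, r)
    # Tail branch: uniform over the rest, [-C, l] U [r, C], of total length C + 1;
    # pick a side with probability proportional to its length.
    if rng.random() < (l + C) / (C + 1.0):
        return rng.uniform(-C, l)
    return rng.uniform(r, C)

# Quick concentration check: the in-band fraction should be ~ e^{eps/2}/(e^{eps/2}+1).
rng = np.random.default_rng(0)
eps, t = 2.0, 0.3
reports = np.array([piecewise_mechanism(t, eps, rng) for _ in range(100_000)])
e_half = np.exp(eps / 2.0)
C = (e_half + 1.0) / (e_half - 1.0)
l = (C + 1.0) / 2.0 * t - (C - 1.0) / 2.0
print(np.mean((reports >= l) & (reports <= l + C - 1.0)))  # ~0.731 for eps = 2
```

The design choice that matters for utility is visible here: unlike Laplace-style noise, the piecewise construction puts a constant share of probability mass in a band that narrows as the privacy budget grows, which is exactly the kind of concentration the framework converts into a robustness requirement.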
Using this framework alongside two novel refinement techniques, we conduct case studies on utility quantification for typical mechanism-classifier combinations. The results demonstrate that our theoretical utility quantification aligns closely with empirical observations, particularly when classifiers operate in lower-dimensional input spaces.
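To make the notion of "utility" in these case studies concrete, the toy Monte Carlo estimate below computes the quantity the framework bounds, $\Pr[f(M(x)) = f(x)]$, for a hypothetical 1-D threshold classifier standing in for a black-box model (names are ours; the paper's refinement techniques are not reproduced), reusing `piecewise_mechanism` from the sketch above.

```python
def threshold_classifier(x: float) -> int:
    # Hypothetical black-box stand-in: a 1-D decision threshold at 0.
    return int(x >= 0.0)

def empirical_utility(xs, eps, trials=5_000, seed=1):
    """Fraction of perturbed inputs whose predicted label is unchanged --
    the empirical counterpart of the theoretical utility bounds."""
    rng = np.random.default_rng(seed)
    agree = sum(
        threshold_classifier(piecewise_mechanism(x, eps, rng)) == threshold_classifier(x)
        for x in xs
        for _ in range(trials)
    )
    return agree / (len(xs) * trials)

# Points far from the decision boundary stay correctly classified over the
# whole concentration band, so utility is high; points near 0 are fragile.
print(empirical_utility([0.8, -0.8], eps=4.0))    # high (~0.93)
print(empirical_utility([0.05, -0.05], eps=4.0))  # noticeably lower
```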