🤖 AI Summary
This paper studies statistical estimation under local differential privacy (LDP) with heterogeneous privacy requirements—where users specify distinct privacy budgets—focusing on one- and multi-dimensional mean estimation and discrete distribution learning under the ℓ∞-distance, with high-probability error bounds (rather than expectation-based guarantees). Methodologically, it integrates customized privacy mechanism design, concentration inequalities, and information-theoretic lower bound analysis. The work establishes the first finite-sample upper bounds for heterogeneous LDP with explicit high-probability guarantees, and provides matching minimax lower bounds, thereby rigorously proving statistical optimality. Specifically, it achieves optimal high-probability ℓ₂-error bounds for mean estimation and high-probability ℓ∞-convergence for distribution learning. These results furnish a theoretical foundation and design principles for personalized LDP mechanisms.
📝 Abstract
We study statistical estimation under local differential privacy (LDP) when users may hold heterogeneous privacy levels and accuracy must be guaranteed with high probability. Departing from the common in-expectation analyses, we develop, for one- and multi-dimensional mean estimation, finite-sample upper bounds in $\ell_2$-norm that hold with probability at least $1-\beta$. We complement these results with matching minimax lower bounds, establishing the optimality (up to constants) of our guarantees in the heterogeneous LDP regime. We further study distribution learning in $\ell_\infty$-distance, designing an algorithm with high-probability guarantees under heterogeneous privacy demands. Our techniques offer principled guidance for designing mechanisms in settings with user-specific privacy levels.
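To make the heterogeneous-budget setting concrete, here is a minimal sketch of a standard baseline for one-dimensional mean estimation: each user perturbs their value with Laplace noise calibrated to their own budget $\varepsilon_i$, and the server combines the reports with inverse-variance weights. This is an illustrative textbook-style construction, not the paper's mechanism; the function name and weighting scheme are assumptions for this sketch.

```python
import numpy as np

def heterogeneous_ldp_mean(x, eps, rng=None):
    """Estimate the mean of x (values in [0, 1]) under per-user LDP.

    Illustrative baseline, NOT the paper's mechanism: user i releases
    y_i = x_i + Lap(1 / eps_i), which is eps_i-LDP since x_i has
    sensitivity 1 on [0, 1]. Because Var(y_i) = 2 / eps_i**2, the
    inverse-variance-weighted average uses weights w_i = eps_i**2,
    so higher-budget (less noisy) users count more.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    eps = np.asarray(eps, dtype=float)
    # Local randomizer: per-user Laplace noise with scale 1 / eps_i.
    y = x + rng.laplace(scale=1.0 / eps, size=x.shape)
    # Server-side aggregation with inverse-variance weights.
    w = eps ** 2
    return np.sum(w * y) / np.sum(w)
```

The weighted average has variance $2/\sum_i \varepsilon_i^2$, which recovers the familiar homogeneous rate when all budgets coincide; the paper's contribution is to give high-probability (rather than in-expectation) guarantees and matching lower bounds in this regime.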