🤖 AI Summary
Multi-objective optimization under differential privacy remains challenging, particularly for sensitive data scenarios where conventional single-objective privacy mechanisms fail to balance competing utility objectives.
Method: This paper proposes two ε-differentially private mechanisms—PrivPareto, which identifies the Pareto frontier via a novel Pareto scoring mechanism, and PrivAgg, which performs privacy-preserving weighted aggregation. We establish the first theoretical framework for composing local sensitivity of multi-objective utility functions, enabling rigorous privacy accounting beyond global sensitivity bounds.
Contribution/Results: Evaluated on real-world tasks—influence maximization in social networks and cost-sensitive decision tree learning—our methods consistently outperform global-sensitivity baselines across ε ∈ [0.01, 1], delivering stable utility gains and practical deployability. The approach unifies multi-objective cooperative optimization with strict (ε, 0)-differential privacy guarantees, overcoming fundamental limitations of prior single-objective selection strategies.
📝 Abstract
Differentially private selection mechanisms are fundamental building blocks for privacy-preserving data analysis. While numerous mechanisms exist for single-objective selection, many real-world applications require optimizing multiple competing objectives simultaneously. We present two novel mechanisms for differentially private multi-objective selection: PrivPareto and PrivAgg. PrivPareto uses a novel Pareto score to identify solutions near the Pareto frontier, while PrivAgg enables privacy-preserving weighted aggregation of multiple objectives. Both mechanisms support global and local sensitivity approaches, with comprehensive theoretical analysis showing how to compose sensitivities of multiple utility functions. We demonstrate the practical applicability through two real-world applications: cost-sensitive decision tree construction and multi-objective influential node selection in social networks. The experimental results showed that our local sensitivity-based approaches achieve significantly better utility compared to global sensitivity approaches across both applications and both Pareto and Aggregation approaches. Moreover, the local sensitivity-based approaches are able to perform well with typical privacy budget values $epsilon in [0.01, 1]$ in most experiments.