Differentially Private Non-convex Distributionally Robust Optimization

📅 2026-02-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the significant performance degradation of traditional empirical risk minimization under distributional shifts, group imbalance, and adversarial perturbations, particularly when subject to differential privacy (DP) constraints. It presents the first systematic study of non-convex DP distributionally robust optimization (DP-DRO), formulating the problem via ψ-divergence-based uncertainty sets and recasting it as a single-level minimization problem (a compositional finite-sum problem in the KL case). The authors propose two novel algorithms—DP Double-Spider and DP Recursive-Spider—that integrate recursive variance reduction with privacy-preserving mechanisms. Under general ψ-divergences, these methods achieve a utility bound of $O(1/\sqrt{n} + (\sqrt{d\log(1/\delta)}/(n\varepsilon))^{2/3})$ in terms of gradient norm; for the KL divergence case, this improves to $O((\sqrt{d\log(1/\delta)}/(n\varepsilon))^{2/3})$, matching the state-of-the-art rate for non-convex DP-ERM. Empirical results demonstrate their superiority over existing DP minimax approaches.
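The KL case hinges on a standard dual reformulation: the worst-case expected loss over a KL ball of radius ρ equals, in penalized form, $\min_{\lambda > 0} \lambda \log \mathbb{E}[\exp(\ell/\lambda)] + \lambda\rho$, which turns the inner maximization into a plain (compositional) minimization. A minimal NumPy sketch of this objective and the adversarial reweighting that attains it — the function names and the penalized (rather than constrained) form are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def kl_dro_objective(losses, lam, rho):
    """Dual of the KL worst-case loss: lam * log(mean(exp(losses / lam))) + lam * rho.
    Computed with a max shift (log-sum-exp trick) for numerical stability."""
    s = losses / lam
    m = s.max()
    return lam * (m + np.log(np.mean(np.exp(s - m)))) + lam * rho

def worst_case_weights(losses, lam):
    """Adversarial reweighting attaining the dual: w_i proportional to exp(loss_i / lam)."""
    s = losses / lam
    w = np.exp(s - s.max())
    return w / w.sum()
```

Two sanity checks: for constant losses the objective reduces to that loss plus λρ, and as λ grows the weights flatten toward uniform, recovering plain ERM.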

📝 Abstract
Real-world deployments routinely face distribution shifts, group imbalances, and adversarial perturbations, under which the traditional Empirical Risk Minimization (ERM) framework can degrade severely. Distributionally Robust Optimization (DRO) addresses this issue by optimizing the worst-case expected loss over an uncertainty set of distributions, offering a principled approach to robustness. Meanwhile, since training data in DRO often involve sensitive information, safeguarding them against leakage under Differential Privacy (DP) is essential. In contrast to classical DP-ERM, DP-DRO has received much less attention due to its minimax optimization structure with an uncertainty constraint. To bridge this gap, we provide a comprehensive study of DP-(finite-sum)-DRO with $\psi$-divergence and non-convex loss. First, we study DRO with a general $\psi$-divergence by reformulating it as a minimization problem, and develop a novel $(\varepsilon, \delta)$-DP optimization method, called DP Double-Spider, tailored to this structure. Under mild assumptions, we show that it achieves a utility bound of $\mathcal{O}(\frac{1}{\sqrt{n}}+ (\frac{\sqrt{d \log (1/\delta)}}{n \varepsilon})^{2/3})$ in terms of the gradient norm, where $n$ denotes the data size and $d$ the model dimension. We further improve the utility rate for specific divergences. In particular, for DP-DRO with KL-divergence, by transforming the problem into a compositional finite-sum optimization problem, we develop a DP Recursive-Spider method and show that it achieves a utility bound of $\mathcal{O}((\frac{\sqrt{d \log(1/\delta)}}{n\varepsilon})^{2/3})$, matching the best-known result for non-convex DP-ERM. Experimentally, we demonstrate that our proposed methods outperform existing approaches for DP minimax optimization.
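Both proposed methods pair SPIDER-style recursive variance reduction with the Gaussian mechanism: each step averages clipped per-sample gradient differences into a running estimator and perturbs it with Gaussian noise. A hedged NumPy sketch of one such update — `clip_vec`, `dp_spider_step`, and the fixed noise scale `sigma` are illustrative stand-ins, not the paper's exact algorithm or its privacy accounting (which composes the noise over all iterations):

```python
import numpy as np

def clip_vec(g, c):
    """Scale g down so its L2 norm is at most c (per-sample clipping)."""
    n = np.linalg.norm(g)
    return g if n <= c else g * (c / n)

def dp_spider_step(grad_fn, x, x_prev, v_prev, batch_idx, clip, sigma, rng):
    """One recursive (SPIDER-style) estimator update with Gaussian DP noise:
    v_t = v_{t-1} + mean_i clip(grad_i(x_t) - grad_i(x_{t-1})) + N(0, sigma^2 I)."""
    diffs = [clip_vec(grad_fn(i, x) - grad_fn(i, x_prev), clip) for i in batch_idx]
    return v_prev + np.mean(diffs, axis=0) + sigma * rng.standard_normal(x.shape)
```

Clipping the gradient *differences* (rather than the gradients themselves) bounds the sensitivity of each recursive increment, which is what lets the noise scale be calibrated per step.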
Problem

Research questions and friction points this paper is trying to address.

Differentially Private
Distributionally Robust Optimization
Non-convex Optimization
Distribution Shift
Privacy-Preserving Machine Learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Differentially Private Optimization
Distributionally Robust Optimization
Non-convex Optimization
ψ-divergence
Minimax Optimization
Difei Xu
Provable Responsible AI and Data Analytics (PRADA) Lab, King Abdullah University of Science and Technology
Meng Ding
University at Buffalo
Trustworthy Statistical Learning
Zebin Ma
Provable Responsible AI and Data Analytics (PRADA) Lab, King Abdullah University of Science and Technology
Huanyi Xie
Provable Responsible AI and Data Analytics (PRADA) Lab, King Abdullah University of Science and Technology
Youming Tao
King Abdullah University of Science and Technology, Technische Universität Berlin
Aicha Slaitane
Provable Responsible AI and Data Analytics (PRADA) Lab, King Abdullah University of Science and Technology
Di Wang
King Abdullah University of Science and Technology
Differential Privacy · Machine Unlearning · Knowledge Editing