Finding Differentially Private Second Order Stationary Points in Stochastic Minimax Optimization

📅 2026-02-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the problem of efficiently finding differentially private second-order stationary points (SOSP) in stochastic nonconvex minimax optimization. To this end, the authors propose a purely first-order algorithm that integrates nested gradient descent-ascent, SPIDER variance reduction, and Gaussian perturbation, along with a novel q-periodic block analysis that controls the accumulation of privacy noise. The method is the first to provide a unified framework for achieving private SOSP under both empirical and population risk settings. Under standard smoothness, Hessian-Lipschitzness, and strong concavity assumptions, the algorithm attains an approximate SOSP with high probability, achieving accuracy of $O((\sqrt{d}/(n\varepsilon))^{2/3})$ for empirical risk and $O(1/n^{1/3} + (\sqrt{d}/(n\varepsilon))^{1/2})$ for population risk, matching the current best-known first-order rates for private nonconvex optimization.
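As a reading aid, here is a minimal numerical sketch of the loop this summary describes, assuming hypothetical gradient oracles `grad_f_x`/`grad_f_y`, a hypothetical driver `dp_spider_gda`, and illustrative hyperparameters (`q`, `batch`, `lr`, `sigma`). It is a sketch under those assumptions, not the paper's actual algorithm, step sizes, or privacy calibration.

```python
# Hypothetical sketch of a nested GDA + SPIDER + Gaussian-perturbation loop.
# All names and hyperparameters are illustrative, not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)

def inner_max(x, y, grad_f_y, steps=20, lr=0.1):
    """Nested ascent: approximately solve max_y f(x, y), which is
    well-posed under the strong-concavity-in-y assumption."""
    for _ in range(steps):
        y = y + lr * grad_f_y(x, y)
    return y

def dp_spider_gda(x0, y0, grad_f_x, grad_f_y, data,
                  T=200, q=10, batch=32, lr=0.05, sigma=0.1):
    """SPIDER-style variance reduction on the primal gradient, with
    Gaussian noise (scale sigma; calibrated to gradient sensitivity in
    the real algorithm) added at every step for differential privacy.
    Every q iterations the estimator is refreshed on the full data (the
    'q-periodic blocks' above); in between it is corrected by gradient
    differences on fresh mini-batches."""
    x, y = x0.copy(), y0.copy()
    x_prev, y_prev = x.copy(), y.copy()
    v = np.zeros_like(x0)
    for t in range(T):
        y = inner_max(x, y, grad_f_y)              # nested ascent step
        if t % q == 0:
            v = grad_f_x(x, y, data)               # block refresh: full batch
        else:
            idx = rng.choice(len(data), size=batch, replace=False)
            v = (v + grad_f_x(x, y, data[idx])
                   - grad_f_x(x_prev, y_prev, data[idx]))  # SPIDER correction
        x_prev, y_prev = x.copy(), y.copy()
        noise = sigma * rng.standard_normal(x.shape)       # Gaussian perturbation
        x = x - lr * (v + noise)                           # descent step
    return x, y

if __name__ == "__main__":
    # Toy saddle problem: f(x, y) = mean_i <a_i, x> + <x, y> - ||y||^2 / 2.
    A = rng.standard_normal((256, 2))
    gx = lambda x, y, d: d.mean(axis=0) + y   # batch gradient in x
    gy = lambda x, y: x - y                   # gradient in y (strongly concave)
    x_out, y_out = dp_spider_gda(np.ones(2), np.zeros(2), gx, gy, A)
    print(x_out)
```

The point of the q-periodic blocks is that the estimator error is reset at every full-batch refresh, so stochastic variance and injected privacy noise accumulate only within a block of length q rather than over the full horizon T.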

📝 Abstract
We provide the first study of the problem of finding differentially private (DP) second-order stationary points (SOSP) in stochastic (non-convex) minimax optimization. Existing literature either focuses only on first-order stationary points for minimax problems or on SOSP for classical stochastic minimization problems. This work provides, for the first time, a unified and detailed treatment of both empirical and population risks. Specifically, we propose a purely first-order method that combines a nested gradient descent-ascent scheme with SPIDER-style variance reduction and Gaussian perturbations to ensure privacy. A key technical device is a block-wise ($q$-period) analysis that controls the accumulation of stochastic variance and privacy noise without summing over the full iteration horizon, yielding a unified treatment of both empirical-risk and population formulations. Under standard smoothness, Hessian-Lipschitzness, and strong concavity assumptions, we establish high-probability guarantees for reaching an $(\alpha,\sqrt{\rho_\Phi \alpha})$-approximate second-order stationary point with $\alpha = \mathcal{O}((\frac{\sqrt{d}}{n\varepsilon})^{2/3})$ for empirical risk objectives and $\mathcal{O}(\frac{1}{n^{1/3}} + (\frac{\sqrt{d}}{n\varepsilon})^{1/2})$ for population objectives, matching the best known rates for private first-order stationarity.
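For reference, the $(\alpha,\sqrt{\rho_\Phi \alpha})$-approximate SOSP notion in the abstract can be written out in its standard Nesterov-Polyak form for the primal function $\Phi(x)=\max_y f(x,y)$ with Hessian-Lipschitz constant $\rho_\Phi$; this is the usual definition from the non-convex optimization literature, and the paper's exact formulation may differ in constants.

```latex
% Standard notion of an approximate SOSP for the primal function
% \Phi(x) = \max_y f(x, y); \rho_\Phi is its Hessian-Lipschitz constant.
% Stated for reference; the paper's exact definition may differ.
x \text{ is an } (\alpha, \sqrt{\rho_\Phi \alpha})\text{-SOSP of } \Phi
\quad\Longleftrightarrow\quad
\|\nabla \Phi(x)\| \le \alpha
\ \text{ and }\
\lambda_{\min}\big(\nabla^2 \Phi(x)\big) \ge -\sqrt{\rho_\Phi \alpha}.
```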
Problem

Research questions and friction points this paper is trying to address.

differentially private
second-order stationary points
stochastic minimax optimization
non-convex optimization
empirical and population risks
Innovation

Methods, ideas, or system contributions that make the work stand out.

differentially private optimization
second-order stationary points
stochastic minimax optimization
variance reduction
privacy-preserving machine learning
👥 Authors
Difei Xu
King Abdullah University of Science and Technology, Provable Responsible AI and Data Analytics (PRADA) Lab
Youming Tao
Provable Responsible AI and Data Analytics (PRADA) Lab, Technische Universität Berlin
Meng Ding
University at Buffalo
Trustworthy Statistical Learning
Chenglin Fan
Provable Responsible AI and Data Analytics (PRADA) Lab, Seoul National University
Di Wang
King Abdullah University of Science and Technology
Differential Privacy, Machine Unlearning, Knowledge Editing