Drago: Primal-Dual Coupled Variance Reduction for Faster Distributionally Robust Optimization

📅 2024-03-16
📈 Citations: 2
Influential: 1
📄 PDF
🤖 AI Summary
This paper studies penalized distributionally robust optimization (DRO) with a closed, convex uncertainty set, a setting that encompasses canonical problems such as $f$-DRO and spectral/$L$-risk minimization. Exploiting the problem's strongly convex–strongly concave structure, the authors propose a hybrid cyclic–randomized sampling scheme, coupled with a regularized primal update and dual variance reduction. This yields what the authors present as the first linearly convergent algorithm whose rate depends in a *fine-grained* way on both the primal and dual condition numbers, matching the state of the art for this setting. Numerical experiments on regression and classification tasks show clear improvements over existing baselines. The core contribution is the synergistic integration of hybrid sampling, variance reduction, and condition-number-sensitive analysis for high-accuracy, high-efficiency DRO.

📝 Abstract
We consider the penalized distributionally robust optimization (DRO) problem with a closed, convex uncertainty set, a setting that encompasses learning using $f$-DRO and spectral/$L$-risk minimization. We present Drago, a stochastic primal-dual algorithm that combines cyclic and randomized components with a carefully regularized primal update to achieve dual variance reduction. Owing to its design, Drago enjoys a state-of-the-art linear convergence rate on strongly convex-strongly concave DRO problems with a fine-grained dependency on primal and dual condition numbers. Theoretical results are supported by numerical benchmarks on regression and classification tasks.
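
For concreteness, penalized DRO problems of this type are commonly written in the following min-max form. The display below is a standard instance consistent with the abstract; the symbols $\ell_i$ (per-example losses), $\nu$ (penalty strength), $\mu$ (regularization), and the divergence $D$ are illustrative assumptions rather than the paper's exact notation:

$$
\min_{w \in \mathbb{R}^d} \; \max_{q \in \mathcal{Q} \subseteq \Delta_n} \; \sum_{i=1}^{n} q_i\,\ell_i(w) \;-\; \nu\, D\!\left(q \,\middle\|\, \tfrac{1}{n}\mathbf{1}\right) \;+\; \frac{\mu}{2}\,\|w\|_2^2 .
$$

Here $\Delta_n$ is the probability simplex and $\mathcal{Q}$ the closed, convex uncertainty set; taking $D$ to be an $f$-divergence recovers $f$-DRO, while spectral/$L$-risk minimization corresponds to a particular choice of $\mathcal{Q}$. The $\nu$-penalty and the $\mu$-regularizer supply the strongly convex-strongly concave structure underlying the linear convergence guarantee.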
Problem

Research questions and friction points this paper is trying to address.

Primal-dual variance reduction algorithm
Faster distributionally robust optimization
Linear convergence for strongly convex-strongly concave problems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Primal-dual stochastic algorithm
Cyclic and randomized components
Regularized primal update achieving dual variance reduction (see the sketch below)
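
To make the bullets above concrete, here is a minimal, hypothetical Python sketch of one way these ingredients can fit together for a penalized DRO objective with a KL-type dual penalty. All names, step sizes, and update rules below are illustrative assumptions for exposition; this is not the paper's Drago algorithm, whose precise updates and guarantees should be taken from the paper itself.

```python
import numpy as np

def primal_dual_dro_sketch(X, y, n_epochs=30, mu=0.1, nu=1.0, lr=1e-4, seed=0):
    """Illustrative primal-dual loop for penalized DRO on least squares.

    Ingredients loosely mirroring the bullets above:
    - cyclic component: a deterministic sweep index refreshes a table
      of stored per-sample losses (the variance-reduction device);
    - randomized component: a uniformly sampled index drives the
      stochastic primal gradient;
    - dual weights q are the entropic best response to the stored
      losses, i.e. a softmax, which the KL penalty makes well defined.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    loss_table = (X @ w - y) ** 2  # stored per-sample losses (stale entries allowed)
    for _ in range(n_epochs):
        for cyc in range(n):              # cyclic sweep over the data
            rnd = int(rng.integers(n))    # independent uniform sample
            for i in (cyc, rnd):          # refresh the table at both indices
                loss_table[i] = (X[i] @ w - y[i]) ** 2
            # Dual update: softmax of stored losses (max-stabilized).
            # This is O(n) per step for simplicity; an efficient
            # implementation would maintain the normalizer incrementally.
            z = (loss_table - loss_table.max()) / nu
            q = np.exp(z)
            q /= q.sum()
            # Primal update: importance-weighted stochastic gradient of
            # the q-weighted loss at the random index, plus the
            # mu-regularizer from the penalized objective.
            resid = X[rnd] @ w - y[rnd]
            g = mu * w + n * q[rnd] * 2.0 * resid * X[rnd]
            w -= lr * g
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 5))
    y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)
    print("fitted weights:", primal_dual_dro_sketch(X, y))
```

In this toy loop, $n\,q_{\mathrm{rnd}}\,\nabla\ell_{\mathrm{rnd}}(w)$ is an unbiased estimate of the $q$-weighted gradient because the random index is uniform, and the cyclic refresh keeps the dual weights from relying on arbitrarily stale losses. The paper's actual coupled updates and per-iteration costs are more refined than this sketch.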
👥 Authors

Ronak Mehta
University of Washington, Seattle

Jelena Diakonikolas
UW-Madison
Optimization, algorithms, machine learning

Zaïd Harchaoui
University of Washington, Seattle