🤖 AI Summary
This paper studies penalty-based distributionally robust optimization (DRO) over a closed convex uncertainty set, a setting that encompasses canonical problems such as $f$-DRO and spectral/$L$-risk minimization. Exploiting the strongly convex-strongly concave structure of the problem, the authors propose a hybrid cyclic-stochastic sampling scheme, coupled with regularized primal updates and dual variance reduction. The resulting algorithm is the first to achieve linear convergence with a fine-grained dependence on both the primal and dual condition numbers, matching the state-of-the-art linear rate. Numerical experiments on regression and classification tasks demonstrate clear improvements over existing baselines. The core contribution is the synergistic integration of hybrid sampling design, variance reduction, and condition-number-sensitive analysis, yielding a high-accuracy, high-efficiency approach to DRO.
📝 Abstract
We consider the penalized distributionally robust optimization (DRO) problem with a closed, convex uncertainty set, a setting that encompasses learning using $f$-DRO and spectral/$L$-risk minimization. We present Drago, a stochastic primal-dual algorithm that combines cyclic and randomized components with a carefully regularized primal update to achieve dual variance reduction. Owing to its design, Drago enjoys a state-of-the-art linear convergence rate on strongly convex-strongly concave DRO problems with a fine-grained dependency on primal and dual condition numbers. Theoretical results are supported by numerical benchmarks on regression and classification tasks.
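To make the setting concrete, a common penalized DRO formulation with a strongly convex primal regularizer and a divergence penalty on the adversarial weights can be written as follows (the notation here is illustrative and not necessarily the paper's):

```latex
% Penalized DRO over a closed convex uncertainty set Q (notation illustrative):
% w: model parameters, \ell_i(w): per-example losses, q: adversarial weights,
% D: a divergence penalty to the uniform distribution (e.g., an f-divergence),
% \mu, \nu > 0: primal and dual regularization strengths.
\min_{w \in \mathbb{R}^d} \; \max_{q \in \mathcal{Q}} \;
  \sum_{i=1}^{n} q_i \, \ell_i(w)
  \;-\; \nu \, D\!\left(q \,\Big\|\, \tfrac{1}{n}\mathbf{1}_n\right)
  \;+\; \frac{\mu}{2} \, \lVert w \rVert_2^2
```

Under this formulation, the $\mu$-term makes the objective strongly convex in $w$ and the $\nu$-weighted divergence makes it strongly concave in $q$; the relative strengths of these terms determine the primal and dual condition numbers on which the stated linear convergence rate depends.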