🤖 AI Summary
To address the performance degradation and algorithmic inefficiency caused by subgroup distributional shifts in fairness-sensitive tasks, this paper proposes a new stochastic optimization algorithm for group distributionally robust optimization (group DRO). Methodologically, it treats group DRO, subgroup fairness, and empirical conditional value-at-risk (CVaR) optimization within a single framework that combines stochastic gradient updates on the model with entropy-regularized updates on coupled dual variables. Theoretically, it achieves the first near-optimal convergence rate for group DRO, and a matching information-theoretic lower bound shows this rate is tight. Empirically, the algorithm converges faster and is more robust than state-of-the-art methods across multiple DRO benchmarks.
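The paper itself gives no code, but the combination of "stochastic gradient updates, dual-variable coupling, and entropy regularization" described above can be illustrated with a standard primal-dual sketch: gradient descent on the model weights and an exponentiated-gradient (entropy-regularized mirror ascent) update on the group weights, which upweights high-loss groups. This is a generic illustration of that update pattern, not the paper's actual algorithm; the linear model, squared loss, and step sizes are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def group_dro_step(w, q, batches, lr_w=0.1, lr_q=0.5):
    """One primal-dual step (illustrative, not the paper's algorithm):
    SGD on model weights w, exponentiated-gradient ascent on group weights q."""
    losses = np.empty(len(batches))
    grads = []
    for g, (X, y) in enumerate(batches):
        resid = X @ w - y                       # linear model, squared loss (assumed)
        losses[g] = 0.5 * np.mean(resid ** 2)
        grads.append(X.T @ resid / len(y))
    # Dual update: multiplicative weights = mirror ascent with entropy regularizer.
    q = q * np.exp(lr_q * losses)
    q /= q.sum()                                # project back onto the simplex
    # Primal update: descend the gradient of the q-weighted group loss.
    w = w - lr_w * sum(qi * gi for qi, gi in zip(q, grads))
    return w, q

# Toy demo: two groups whose labels depend on the same feature with different slopes.
d = 3
w = np.zeros(d)
q = np.full(2, 0.5)
for _ in range(200):
    batches = []
    for slope in (1.0, 2.0):
        X = rng.normal(size=(16, d))
        y = slope * X[:, 0]
        batches.append((X, y))
    w, q = group_dro_step(w, q, batches)
```

After training, `q` remains a probability vector over groups, with more mass on whichever group incurs the larger stochastic loss at each step.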
📝 Abstract
Distributionally robust optimization (DRO) can improve the robustness and fairness of learning methods. In this paper, we devise stochastic algorithms for a class of DRO problems including group DRO, subpopulation fairness, and empirical conditional value at risk (CVaR) optimization. Our new algorithms achieve faster convergence rates than existing algorithms for multiple DRO settings. We also provide a new information-theoretic lower bound that implies our bounds are tight for group DRO. Empirically, our algorithms also outperform known methods.