🤖 AI Summary
This paper surveys distributionally robust optimization (DRO): seeking decisions that perform well under the worst-case distribution within an ambiguity set—defined, for example, by Wasserstein distance or φ-divergence—when the true data-generating distribution is unknown. Methodologically, it presents systematic connections between DRO and key machine learning techniques, including regularization and adversarial training, drawing statistical learning, operations research, and control theory into a unified theoretical framework. The approach combines ambiguity-set construction, min-max optimization of expected loss, duality analysis, and rigorous robustness verification, balancing theoretical interpretability with computational tractability. The resulting methodology improves model generalization and decision robustness under distributional shift, with applications in high-stakes domains such as financial risk management, medical diagnosis, and AI safety.
📝 Abstract
Distributionally robust optimization (DRO) studies decision problems under uncertainty where the probability distribution governing the uncertain problem parameters is itself uncertain. A key component of any DRO model is its ambiguity set, that is, a family of probability distributions consistent with any available structural or statistical information. DRO seeks decisions that perform best under the worst distribution in the ambiguity set. This worst-case criterion is supported by findings in psychology and neuroscience, which indicate that many decision-makers have a low tolerance for distributional ambiguity. DRO is rooted in statistics, operations research, and control theory, and recent research has uncovered its deep connections to regularization techniques and adversarial training in machine learning. This survey presents the key findings of the field in a unified and self-contained manner.