🤖 AI Summary
Wasserstein distributionally robust optimization (WDRO) suffers from robust overfitting because its uncertainty set construction neglects statistical estimation error.
Method: We propose a novel uncertainty set that jointly incorporates the Wasserstein distance and the KL divergence, the first to model both distributional shift and statistical uncertainty in a unified way. Based on this set, we formulate a robust optimization framework with probabilistic guarantees, derive provable robust generalization bounds, and characterize conditions under which its solution corresponds to a Stackelberg equilibrium.
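The combined uncertainty set can be sketched as follows. The nesting of the two balls and the symbols $\rho$, $\varepsilon$ are illustrative assumptions, not the paper's exact definition: with $\hat{P}_n$ the empirical distribution, $\rho$ a Wasserstein radius for adversarial shift, and $\varepsilon$ a KL radius for statistical error, one plausible formalization is

```latex
\min_{\theta}\; \sup_{Q \in \mathcal{U}(\rho,\varepsilon)} \mathbb{E}_{Q}\!\left[\ell(\theta; Z)\right],
\qquad
\mathcal{U}(\rho,\varepsilon) = \bigl\{\, Q : W_p(Q, P) \le \rho
\ \text{for some } P \ \text{with } \mathrm{KL}(P \,\|\, \hat{P}_n) \le \varepsilon \,\bigr\}.
```

Here the KL ball accounts for the gap between the empirical and true data distributions, while the Wasserstein ball around each candidate $P$ models adversarial perturbations.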
Results: We theoretically establish that, with high probability, the robust test performance is at least as good as the statistically robust training loss. Empirical evaluation across multiple benchmarks demonstrates substantial mitigation of robust overfitting, improved out-of-distribution adversarial accuracy, and empirical behavior consistent with the theoretical guarantees.
📝 Abstract
Wasserstein distributionally robust optimization (WDRO) optimizes against worst-case distributional shifts within a specified uncertainty set, leading to enhanced generalization on unseen adversarial examples compared to standard adversarial training, which focuses on pointwise adversarial perturbations. However, WDRO still suffers fundamentally from the robust overfitting problem, as it does not account for statistical error. We address this gap by proposing a robust optimization framework, called Statistically Robust WDRO, built on a new uncertainty set that captures adversarial noise via the Wasserstein distance and statistical error via the Kullback-Leibler divergence. We establish a robust generalization bound for the new optimization framework, implying that out-of-distribution adversarial performance is at least as good as the statistically robust training loss with high probability. Furthermore, we derive conditions under which Stackelberg and Nash equilibria exist between the learner and the adversary, yielding an optimal robust model in a certain sense. Finally, through extensive experiments, we demonstrate that our method significantly mitigates robust overfitting and enhances robustness within the framework of WDRO.
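To make the learner–adversary min–max structure concrete, here is a minimal sketch of Wasserstein-style robust training via the standard Lagrangian penalty relaxation (the inner adversary maximizes the per-sample loss minus a transport-cost penalty). This is a generic WDRO illustration on toy logistic regression, not the paper's Statistically Robust WDRO algorithm; the penalty weight `gamma`, step sizes, and data are all assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data (assumed for illustration)
n, d = 200, 2
X = rng.normal(size=(n, d))
y = np.sign(X @ np.array([1.0, -1.0]) + 0.1 * rng.normal(size=n))

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def logistic_loss(theta, X, y):
    # Mean logistic loss: log(1 + exp(-y * <theta, x>))
    return np.mean(np.log1p(np.exp(-y * (X @ theta))))

def perturb(theta, X, y, gamma=5.0, steps=15, lr=0.1):
    """Inner adversary: per-sample gradient ascent on
    loss(theta; x_i + delta_i) - gamma * ||delta_i||^2,
    the Lagrangian relaxation of a Wasserstein ball constraint."""
    delta = np.zeros_like(X)
    for _ in range(steps):
        z = (X + delta) @ theta
        s = sigmoid(-y * z)                 # per-sample loss derivative in z
        g = (-y * s)[:, None] * theta       # gradient of loss w.r.t. each x_i
        delta += lr * (g - 2.0 * gamma * delta)
    return X + delta

# Outer learner: gradient descent on the worst-case (perturbed) samples
theta = np.zeros(d)
for _ in range(100):
    X_adv = perturb(theta, X, y)
    s = sigmoid(-y * (X_adv @ theta))
    grad = (X_adv.T @ (-y * s)) / n
    theta -= 0.5 * grad
```

The alternation above is exactly the Stackelberg structure mentioned in the abstract: the adversary best-responds to the current `theta`, and the learner then descends on the adversarially shifted data.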