Provable Robust Overfitting Mitigation in Wasserstein Distributionally Robust Optimization

📅 2025-03-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Wasserstein distributionally robust optimization (WDRO) suffers from robust overfitting because standard uncertainty-set construction neglects statistical estimation error. Method: The authors propose a novel uncertainty set that jointly incorporates the Wasserstein distance (modeling distributional shift) and the KL divergence (modeling statistical uncertainty). Based on this set, they formulate a robust optimization framework with probabilistic guarantees, derive provable robust generalization bounds, and characterize conditions under which its solution corresponds to a Stackelberg equilibrium. Results: They establish that, with high probability, robust test performance is at least as good as the statistically robust training loss. Empirical evaluation across multiple benchmarks demonstrates substantial mitigation of robust overfitting and improved out-of-distribution adversarial accuracy, consistent with the theoretical guarantees.

📝 Abstract
Wasserstein distributionally robust optimization (WDRO) optimizes against worst-case distributional shifts within a specified uncertainty set, leading to better generalization on unseen adversarial examples than standard adversarial training, which focuses on pointwise adversarial perturbations. However, WDRO still suffers fundamentally from the robust overfitting problem, as it does not account for statistical error. We address this gap by proposing a novel robust optimization framework, called Statistically Robust WDRO, built on a new uncertainty set that models adversarial noise via the Wasserstein distance and statistical error via the Kullback-Leibler divergence. We establish a robust generalization bound for the new optimization framework, implying that out-of-distribution adversarial performance is at least as good as the statistically robust training loss with high probability. Furthermore, we derive conditions under which Stackelberg and Nash equilibria exist between the learner and the adversary, yielding an optimal robust model in a certain sense. Finally, through extensive experiments, we demonstrate that our method significantly mitigates robust overfitting and enhances robustness within the framework of WDRO.
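For context, standard WDRO minimizes the worst-case expected loss over a Wasserstein ball around the empirical distribution. The schematic below sketches that standard objective together with one natural reading of the combined Wasserstein-plus-KL set the abstract describes; the second display is an assumption for illustration, not the paper's exact definition.

```latex
% Standard WDRO over a Wasserstein ball of radius \rho around the
% empirical distribution \hat{P}_n:
\min_{\theta} \;\; \sup_{Q \,:\, W_c(Q,\, \hat{P}_n) \le \rho}
\; \mathbb{E}_{Z \sim Q}\big[\ell(\theta; Z)\big]

% One schematic reading of the combined uncertainty set (illustrative
% assumption): distributions within a Wasserstein shift of size \rho of
% some P that is itself KL-close to \hat{P}_n, so that statistical
% estimation error is modeled explicitly alongside adversarial shift.
\mathcal{U}(\hat{P}_n) \;=\;
\Big\{\, Q \;:\; W_c(Q, P) \le \rho
\ \text{ for some } P \ \text{with}\
\mathrm{KL}\big(\hat{P}_n \,\|\, P\big) \le \varepsilon \,\Big\}
```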
Problem

Research questions and friction points this paper is trying to address.

Mitigates robust overfitting in Wasserstein distributionally robust optimization.
Proposes a new framework combining Wasserstein distance and Kullback-Leibler divergence.
Establishes robust generalization bounds and optimal adversarial equilibria.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines Wasserstein distance with Kullback-Leibler divergence
Introduces Statistically Robust WDRO framework
Establishes robust generalization bound
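This page does not spell out the paper's algorithm. As background for the WDRO machinery these bullets build on, the sketch below solves the well-known Lagrangian-penalty inner maximization of WDRO (in the style of principled adversarial training) for a toy linear-regression loss; all names and constants here are illustrative assumptions, not the paper's method.

```python
import numpy as np

def wdro_penalty_loss(theta, x, y, gamma, steps=200, lr=0.05):
    """Inner maximization of the Wasserstein-penalty surrogate:
        max_delta  (theta . (x + delta) - y)**2 - gamma * ||delta||^2.
    The objective is strictly concave in delta when gamma > ||theta||^2,
    so plain gradient ascent finds the unique worst-case perturbation."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        r = theta @ (x + delta) - y               # residual at perturbed input
        grad = 2.0 * r * theta - 2.0 * gamma * delta
        delta = delta + lr * grad                 # gradient ascent step
    r = theta @ (x + delta) - y
    return r ** 2 - gamma * (delta @ delta)       # robust surrogate loss

# Toy data: gamma = 5 exceeds ||theta||^2 = 1.25, so ascent converges.
theta = np.array([1.0, -0.5])
x, y = np.array([1.0, 2.0]), 0.5
clean = (theta @ x - y) ** 2
robust = wdro_penalty_loss(theta, x, y, gamma=5.0)
```

Because the adversary may leave the input unperturbed, the robust surrogate loss is never below the clean loss; the gap between the two is what the robust generalization bound controls.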
Shuang Liu
State Key Laboratory of Mathematical Sciences, Academy of Mathematics and Systems Science, Chinese Academy of Sciences; University of Chinese Academy of Sciences, Beijing 100049, China
Yihan Wang
State Key Laboratory of Mathematical Sciences, Academy of Mathematics and Systems Science, Chinese Academy of Sciences; University of Chinese Academy of Sciences, Beijing 100049, China
Yifan Zhu
Beijing University of Posts and Telecommunications
PEFT of LLMs · Graph RAG · Graph mining
Yibo Miao
Shanghai Jiao Tong University; Moonshot
Deep Learning · Natural Language Processing · Large Language Models
Xiao-Shan Gao
AMSS, CAS
Automated Reasoning · Symbolic Computation · Machine Learning Theory