Overfitting in Adaptive Robust Optimization

📅 2025-09-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper identifies an “overfitting” phenomenon in adaptive robust optimization (ARO): when adaptive decisions depend on uncertainty realizations, policies often fail outside the nominal uncertainty set, mirroring generalization failure in machine learning. To address this, the authors propose a regularization-inspired approach: a hierarchical uncertainty set structure that allocates set sizes according to constraint importance, providing stronger probabilistic feasibility guarantees for critical constraints. The method integrates adaptive robust modeling, probabilistic feasibility analysis, and constraint-decoupling techniques. Theoretically, the authors establish a novel analytical framework for mitigating adaptive overfitting. Empirically, the proposed mechanism significantly improves out-of-distribution feasibility and solution stability, achieving a superior trade-off between robustness and adaptivity.

📝 Abstract
Adaptive robust optimization (ARO) extends static robust optimization by allowing decisions to depend on the realized uncertainty, weakly dominating static solutions within the modeled uncertainty set. However, ARO makes constraints that were previously independent of the uncertainty dependent on it, leaving solutions vulnerable to additional infeasibilities when realizations fall outside the uncertainty set. This brittleness of adaptive policies is analogous to overfitting in machine learning. To mitigate it, we propose assigning constraint-specific uncertainty set sizes, with harder constraints given stronger probabilistic guarantees. Interpreted through the overfitting lens, this acts as regularization: tighter guarantees shrink adaptive coefficients to ensure stability, while looser ones preserve useful flexibility. This view motivates a principled approach to designing uncertainty sets that balances robustness and adaptivity.
Problem

Research questions and friction points this paper is trying to address.

Addresses overfitting in adaptive robust optimization
Mitigates brittleness when uncertainty exceeds modeled sets
Proposes constraint-specific uncertainty regularization for stability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Constraint-specific uncertainty set sizing
Tighter guarantees shrink adaptive coefficients
Balancing robustness and adaptivity via regularization
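The constraint-specific sizing idea can be illustrated with budgeted uncertainty sets in the style of Bertsimas–Sim, where each constraint row receives its own budget Γ_i and a larger budget yields a larger worst-case protection term (a stronger feasibility guarantee). A minimal sketch, assuming per-row budgets; the function name and the numerical data are hypothetical, not from the paper:

```python
import numpy as np

def robust_protection(dev_coeffs, gamma):
    """Worst-case protection term max_{z in U(gamma)} z^T c for one
    constraint row, over the budgeted set
    U(gamma) = {z : |z_j| <= 1, sum_j |z_j| <= gamma}.
    The maximizer puts full weight on the floor(gamma) largest |c_j|
    and fractional weight gamma - floor(gamma) on the next one."""
    mags = np.sort(np.abs(dev_coeffs))[::-1]  # |c_j| in decreasing order
    k = int(np.floor(gamma))
    frac = gamma - k
    tail = frac * mags[k] if k < len(mags) else 0.0
    return float(mags[:k].sum() + tail)

# Hypothetical deviation coefficients, identical for both constraints,
# so only the constraint-specific budget differs.
c = np.array([2.0, 1.0, 0.5])

# A critical constraint gets a larger budget (bigger uncertainty set),
# a soft constraint a smaller one.
print(robust_protection(c, gamma=2.5))  # 2.0 + 1.0 + 0.5*0.5 = 3.25
print(robust_protection(c, gamma=1.0))  # 2.0
```

The gap between the two protection terms is the price of the stronger guarantee on the critical constraint; tuning the per-row budgets is the regularization knob the paper interprets through the overfitting lens.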
Karl Zhu
Operations Research Center, Massachusetts Institute of Technology, Cambridge, MA, USA
Dimitris Bertsimas
Boeing Professor of Operations Research, MIT
Operations Research · Optimization · Stochastics · Analytics · Health Care