Sufficient Decision Proxies for Decision-Focused Learning

📅 2025-05-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
In decision-focused learning (DFL) for optimization under uncertainty, a fundamental modeling choice remains unresolved: whether to predict a single scenario (a point prediction) for each uncertain parameter or to estimate the full underlying distribution. This work formally characterizes the problem regimes in which each paradigm is valid. Method: the authors propose lightweight, provably sufficient decision proxies that bypass full-distribution modeling and relax the strong assumptions behind single-scenario approximations, instead learning low-dimensional prediction targets that are sufficient to guarantee optimal decision quality. The approach combines embedding of convex/combinatorial optimization problems, decision-centric loss design, and rigorous uncertainty-aware analysis. Contribution/Results: the proposed proxies substantially improve decision quality on tasks with continuous and discrete uncertainty, in both the objective and the constraints, while keeping prediction complexity comparable to standard supervised learning and outperforming conventional two-stage and end-to-end DFL baselines across diverse benchmarks.
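To see why the point-prediction-versus-distribution choice matters, consider a toy newsvendor problem (this sketch is not from the paper; all function names and the cost parameters are hypothetical). With asymmetric overage/underage costs, the deterministic single-scenario proxy built from the mean prediction is suboptimal, while the distributional view recovers the optimal critical-fractile order quantity:

```python
import numpy as np

def newsvendor_cost(order, demand, c_over=1.0, c_under=3.0):
    # Cost per unsold unit (overage) plus cost per unit of lost demand (underage).
    return c_over * max(order - demand, 0.0) + c_under * max(demand - order, 0.0)

def expected_cost(order, demand_samples, c_over=1.0, c_under=3.0):
    return float(np.mean([newsvendor_cost(order, d, c_over, c_under)
                          for d in demand_samples]))

def decide_point(predicted_demand):
    # Single-scenario proxy: the deterministic problem's optimum is to
    # order exactly the predicted demand.
    return float(predicted_demand)

def decide_distributional(demand_samples, c_over=1.0, c_under=3.0):
    # With the distribution available, the expected-cost minimizer is the
    # critical-fractile quantile c_under / (c_over + c_under) of demand.
    return float(np.quantile(demand_samples, c_under / (c_over + c_under)))

rng = np.random.default_rng(0)
demand = rng.normal(100.0, 20.0, size=10_000)

order_point = decide_point(demand.mean())   # ~100: mean used as point prediction
order_dist = decide_distributional(demand)  # ~113.5: the 0.75 demand quantile

# With asymmetric costs, the mean-based single-scenario proxy pays more.
print(expected_cost(order_point, demand), expected_cost(order_dist, demand))
```

When the costs are symmetric (`c_over == c_under`), both decisions coincide at the median: exactly the kind of problem regime in which a single-scenario proxy is sufficient.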

📝 Abstract
When solving optimization problems under uncertainty with contextual data, utilizing machine learning to predict the uncertain parameters is a popular and effective approach. Decision-focused learning (DFL) aims at learning a predictive model such that decision quality, instead of prediction accuracy, is maximized. Common practice here is to predict a single value for each uncertain parameter, implicitly assuming that there exists a (single-scenario) deterministic problem approximation (proxy) that is sufficient to obtain an optimal decision. Other work assumes the opposite, where the underlying distribution needs to be estimated. However, little is known about when either choice is valid. This paper investigates for the first time problem properties that justify using either assumption. Using this, we present effective decision proxies for DFL, with very limited compromise on the complexity of the learning task. We show the effectiveness of presented approaches in experiments on problems with continuous and discrete variables, as well as uncertainty in the objective function and in the constraints.
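The core DFL idea in the abstract, maximizing decision quality rather than prediction accuracy, can be made concrete with a minimal regret computation (an illustrative sketch, not code from the paper; the tiny argmin "optimizer" and all names are hypothetical stand-ins):

```python
import numpy as np

def decide(costs):
    # Downstream optimizer: pick the single cheapest option
    # (a stand-in for any argmin-style optimization problem).
    return int(np.argmin(costs))

def regret(pred_costs, true_costs):
    # Decision-focused loss: extra true cost incurred by acting on
    # predicted parameters instead of the (unknown) true ones.
    return float(true_costs[decide(pred_costs)] - true_costs[decide(true_costs)])

def mse(pred, true):
    # Standard prediction-accuracy loss, for contrast.
    return float(np.mean((pred - true) ** 2))

true_c = np.array([3.0, 1.0, 2.0])
pred_a = np.array([10.0, 5.0, 8.0])   # wildly inaccurate, but decision-preserving
pred_b = np.array([1.05, 1.1, 2.0])   # far more accurate, but flips the decision

print(mse(pred_a, true_c), regret(pred_a, true_c))  # high MSE, zero regret
print(mse(pred_b, true_c), regret(pred_b, true_c))  # low MSE, positive regret
```

Here `pred_a` has far worse prediction error yet induces the optimal decision, while the more accurate `pred_b` does not; this gap is exactly what decision-focused losses target.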
Problem

Research questions and friction points this paper is trying to address.

Investigates conditions for using single-scenario proxies versus distribution estimation in DFL.
Proposes effective decision proxies for optimization under uncertainty with minimal learning complexity.
Validates approaches on problems with continuous/discrete variables and uncertain objectives/constraints.
Innovation

Methods, ideas, or system contributions that make the work stand out.

First formal characterization of the problem properties under which single-scenario proxies or distribution estimation are valid
Lightweight, provably sufficient decision proxies whose learning complexity stays close to standard supervised prediction
Empirical validation on continuous and discrete problems with uncertainty in both the objective and the constraints