A Dual Perspective on Decision-Focused Learning: Scalable Training via Dual-Guided Surrogates

๐Ÿ“… 2025-11-07
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Decision-focused learning methods in the predict-then-optimize paradigm suffer from poor scalability and heavy reliance on repeated calls to combinatorial optimization solvers. Method: This paper proposes a dual-perspective learning framework that incorporates dual variables to guide prediction model training, introducing a differentiable Dual-Guided Loss (DGL) grounded in duality theory. DGL decouples optimization from gradient updates, enabling decision alignment while drastically reducing solver invocations. We further design a periodic-solving strategy coupled with a dual-adjustment objective to accommodate canonical combinatorial selection tasksโ€”including matching, knapsack, and shortest path problems. Results: Experiments on two benchmark task families demonstrate that our method matches or surpasses state-of-the-art decision-focused approaches in solution quality, while significantly accelerating training, reducing solver calls, and ensuring asymptotic convergence of decision regret.

๐Ÿ“ Abstract
Many real-world decisions are made under uncertainty by solving optimization problems using predicted quantities. This predict-then-optimize paradigm has motivated decision-focused learning, which trains models with awareness of how the optimizer uses predictions, improving the performance of downstream decisions. Despite its promise, scaling is challenging: state-of-the-art methods either differentiate through a solver or rely on task-specific surrogates, both of which require frequent and expensive calls to an optimizer, often a combinatorial one. In this paper, we leverage dual variables from the downstream problem to shape learning and introduce Dual-Guided Loss (DGL), a simple, scalable objective that preserves decision alignment while reducing solver dependence. We construct DGL specifically for combinatorial selection problems with natural one-of-many constraints, such as matching, knapsack, and shortest path. Our approach (a) decouples optimization from gradient updates by solving the downstream problem only periodically; (b) between refreshes, trains on dual-adjusted targets using simple differentiable surrogate losses; and (c) as refreshes become less frequent, drives training cost toward standard supervised learning while retaining strong decision alignment. We prove that DGL has asymptotically diminishing decision regret, analyze runtime complexity, and show on two problem classes that DGL matches or exceeds state-of-the-art DFL methods while using far fewer solver calls and substantially less training time. Code is available at https://github.com/paularodr/Dual-Guided-Learning.
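To make the duality idea concrete (a generic illustration, not the paper's exact construction): for a downstream problem with an LP relaxation, the optimal dual variables yield reduced costs that score each decision variable by its marginal value net of the resources it consumes. Training against such dual-adjusted targets lets a simple regression loss reflect downstream decision quality without re-solving at every gradient step:

```latex
\max_{x}\; c^\top x
\quad \text{s.t.} \quad Ax \le b,\; x \ge 0,
\qquad
\bar c \;=\; c - A^\top \lambda^\star ,
```

where $\lambda^\star$ are the optimal duals of the constraints $Ax \le b$ and $\bar c$ is the vector of reduced costs used as adjusted prediction targets between solver refreshes.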
Problem

Research questions and friction points this paper is trying to address.

Scaling decision-focused learning by reducing reliance on expensive solver calls
Designing dual-guided surrogate losses for combinatorial optimization problems
Maintaining decision alignment while approaching the training cost of standard supervised learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses dual variables from the downstream problem to guide prediction-model training
Decouples optimization from gradient updates by solving the downstream problem only periodically
Trains on dual-adjusted targets with simple differentiable surrogate losses between refreshes
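The periodic-solving loop above can be sketched in a few lines. This is a minimal illustration under assumptions not taken from the paper: a linear predictor, a fractional-knapsack LP relaxation as the downstream problem (whose capacity dual is just the value density of the critical item), and a plain MSE loss on dual-adjusted (reduced-cost) targets. All names (`knapsack_lp_dual`, `refresh_every`, etc.) are hypothetical.

```python
import numpy as np

def knapsack_lp_dual(values, weights, capacity):
    """Dual of the capacity constraint in the fractional-knapsack LP
    relaxation: the value density of the item at which capacity binds."""
    order = np.argsort(-values / weights)      # greedy by value density
    used = 0.0
    for i in order:
        if used + weights[i] > capacity:
            return values[i] / weights[i]      # critical item's density
        used += weights[i]
    return 0.0                                 # capacity not binding

rng = np.random.default_rng(0)
n, d, capacity = 50, 5, 10.0
X = rng.normal(size=(n, d))                    # item features
true_w = rng.normal(size=d)
y = X @ true_w + 0.1 * rng.normal(size=n)      # true item values
weights = rng.uniform(0.5, 1.5, size=n)        # item weights

theta = np.zeros(d)                            # linear predictor
lam, refresh_every, lr = 0.0, 20, 0.01
for step in range(200):
    if step % refresh_every == 0:              # periodic solver call only
        preds = np.maximum(X @ theta, 1e-6)    # keep densities positive
        lam = knapsack_lp_dual(preds, weights, capacity)
    # dual-adjusted target: reduced cost y_i - lam * w_i
    target = y - lam * weights
    grad = 2 * X.T @ (X @ theta - target) / n  # MSE gradient, no solver
    theta -= lr * grad
```

Between refreshes the inner loop is ordinary supervised regression, so as `refresh_every` grows the per-step cost approaches that of standard supervised learning, which is the scalability argument the paper makes.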
๐Ÿ”Ž Similar Papers
No similar papers found.