From Sequential to Parallel: Reformulating Dynamic Programming as GPU Kernels for Large-Scale Stochastic Combinatorial Optimization

📅 2026-02-05
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the computational bottleneck in scenario-based Sample Average Approximation (SAA) arising from solving NP-hard two-stage integer subproblems by proposing the first GPU-parallel framework capable of supporting full-fidelity integer second-stage models. By reformulating dynamic programming as customized CUDA kernels, the approach enables fine-grained parallelism across scenarios, dynamic programming layers, and action dimensions, complemented by a hardware-aware batching strategy that performs Bellman updates for over one million scenarios in a single pass. Evaluated on stochastic vehicle routing and inventory reinsertion problems, the method achieves speedups of 10²–10⁵× and demonstrates near-linear scalability to million-scenario SAA instances, substantially improving both solution quality and scalability.

Technology Category

Application Category

๐Ÿ“ Abstract
A major bottleneck in scenario-based Sample Average Approximation (SAA) for stochastic programming (SP) is the cost of solving an exact second-stage problem for every scenario, especially when each scenario contains an NP-hard combinatorial structure. This has led much of the SP literature to restrict the second stage to linear or simplified models. We develop a GPU-based framework that makes full-fidelity integer second-stage models tractable at scale. The key innovation is a set of hardware-aware, scenario-batched GPU kernels that expose parallelism across scenarios, dynamic-programming (DP) layers, and route or action options, enabling Bellman updates to be executed in a single pass over more than 1,000,000 realizations. We evaluate the approach in two representative SP settings: a vectorized split operator for stochastic vehicle routing and a DP for inventory reinsertion. Our implementation scales nearly linearly in the number of scenarios and achieves speedups of two to five orders of magnitude. This computational leverage directly improves decision quality: much larger scenario sets and many more first-stage candidates can be evaluated within fixed time budgets, consistently yielding stronger SAA solutions. Our results show that full-fidelity integer second-stage models are tractable at scales previously considered impossible, providing a practical path to large-scale, realistic stochastic discrete optimization.
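To make the scenario-batched Bellman update concrete, here is a minimal vectorized sketch in NumPy. It is an illustrative stand-in for the paper's CUDA kernels, not their actual implementation: the function name, the array shapes, and the layered-DP structure (fixed successor table `succ`, scenario-dependent costs) are all assumptions chosen for clarity. The key idea it shows is the same, though: one fused pass updates the values of every (scenario, state) pair over all actions at once, instead of looping over scenarios.

```python
import numpy as np

def batched_bellman_update(values, costs, succ):
    """One synchronous Bellman update across all scenarios at once.

    values: (S, N)    current value of each of N states, per scenario
    costs:  (S, N, A) scenario-dependent cost of action a in state i
    succ:   (N, A)    successor state reached by action a from state i
    Returns the updated (S, N) value array.
    """
    # values[:, succ] gathers successor values for every
    # (scenario, state, action) triple, broadcasting to shape (S, N, A).
    q = costs + values[:, succ]
    # Minimize over the action axis in a single vectorized pass.
    return q.min(axis=2)

# Toy instance: 3 scenarios, 4 states, 2 actions (hypothetical sizes).
rng = np.random.default_rng(0)
S, N, A = 3, 4, 2
succ = rng.integers(0, N, size=(N, A))
costs = rng.random((S, N, A))
values = batched_bellman_update(np.zeros((S, N)), costs, succ)
print(values.shape)  # (3, 4)
```

On a GPU, the same gather-add-reduce pattern would be mapped to one thread per (scenario, state) pair with the action reduction done in registers, which is roughly the granularity of parallelism the abstract describes.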
Problem

Research questions and friction points this paper is trying to address.

stochastic programming
Sample Average Approximation
combinatorial optimization
second-stage problem
NP-hard
Innovation

Methods, ideas, or system contributions that make the work stand out.

GPU acceleration
dynamic programming
stochastic combinatorial optimization
scenario batching
integer second-stage models
Jingyi Zhao
Shenzhen Research Institute of Big Data
Inventory Routing · Stochastic Programming · Learning to Optimize · Meta-heuristic
Linxin Yang
Shenzhen Research Institute of Big Data, Shenzhen, China; School of Data Science, The Chinese University of Hong Kong, Shenzhen, China
Haohua Zhang
AutoKernel, Shenzhen, China
Tian Ding
Shenzhen Research Institute of Big Data