AI Summary
This work addresses the computational bottleneck in scenario-based Sample Average Approximation (SAA) that arises from solving NP-hard two-stage integer subproblems, proposing the first GPU-parallel framework to support full-fidelity integer second-stage models. By reformulating dynamic programming as customized CUDA kernels, the approach exposes fine-grained parallelism across scenarios, dynamic-programming layers, and action dimensions, complemented by a hardware-aware batching strategy that performs Bellman updates for over one million scenarios in a single pass. Evaluated on stochastic vehicle routing and inventory reinsertion problems, the method achieves speedups of 10²–10⁵× and scales near-linearly to million-scenario SAA instances, substantially improving both solution quality and scalability.
Abstract
A major bottleneck in scenario-based Sample Average Approximation (SAA) for stochastic programming (SP) is the cost of solving an exact second-stage problem for every scenario, especially when each scenario contains an NP-hard combinatorial structure. This has led much of the SP literature to restrict the second stage to linear or simplified models. We develop a GPU-based framework that makes full-fidelity integer second-stage models tractable at scale. The key innovation is a set of hardware-aware, scenario-batched GPU kernels that expose parallelism across scenarios, dynamic-programming (DP) layers, and route or action options, enabling Bellman updates to be executed in a single pass over more than 1,000,000 realizations. We evaluate the approach in two representative SP settings: a vectorized split operator for stochastic vehicle routing and a DP for inventory reinsertion. The implementation scales nearly linearly in the number of scenarios and achieves speedups of two to five orders of magnitude. This computational leverage directly improves decision quality: much larger scenario sets and many more first-stage candidates can be evaluated within fixed time budgets, consistently yielding stronger SAA solutions. Our results show that full-fidelity integer second-stage models are tractable at scales previously considered impossible, providing a practical path to large-scale, realistic stochastic discrete optimization.
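To make the scenario-batched Bellman update concrete, the following is a minimal NumPy sketch of the idea the abstract describes: a single vectorized pass updates the value function for all scenarios and all states of a DP layer at once. The paper's implementation uses customized CUDA kernels; every shape, cost tensor, and transition structure below is an illustrative assumption, not the authors' actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
S, T, N, A = 1024, 8, 16, 4  # scenarios, DP layers, states per layer, actions
# (hypothetical sizes; the paper batches over more than one million scenarios)

# Hypothetical problem data:
# cost[s, t, i, a] = stage cost of action a in state i at layer t under scenario s
cost = rng.random((S, T, N, A))
# nxt[i, a] = successor state of (state i, action a); shared across scenarios here
nxt = rng.integers(0, N, size=(N, A))

V = np.zeros((S, N))  # terminal cost-to-go, one value function per scenario
for t in reversed(range(T)):
    # Batched Bellman update for all S scenarios and N states simultaneously:
    #   Q[s, i, a] = cost[s, t, i, a] + V[s, nxt[i, a]]
    Q = cost[:, t] + V[:, nxt]  # fancy indexing broadcasts to shape (S, N, A)
    V = Q.min(axis=2)           # minimize over actions -> shape (S, N)
```

On a GPU, each (scenario, state) pair of this update maps naturally to an independent thread, which is the source of the fine-grained parallelism across scenarios, DP layers, and action options described above.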