🤖 AI Summary
This paper addresses the intractable optimization of exponentially large, non-decomposable combinatorial action spaces in coupled restless multi-armed bandits (coRMAB). We propose an end-to-end framework that combines deep reinforcement learning with combinatorial optimization. Methodologically, we present the first integration of a deep Q-network into a mixed-integer linear programming (MILP) solver, enabling long-horizon, reward-driven joint action selection; we also formally define the coRMAB setting, which supports strong coupling constraints such as multi-intervention effects and path dependence. The approach unifies neural value-function embedding with combinatorial modeling. Evaluated on four novel constrained restless bandit benchmarks, it achieves an average 26.4% improvement in cumulative reward over state-of-the-art baselines, none of which handle sequential dependencies and combinatorial structure simultaneously. The work establishes a principled bridge between deep RL and exact combinatorial optimization for highly coupled sequential decision-making under nonstationarity.
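For intuition on what "integrating a deep Q-network into a MILP solver" involves: a trained ReLU network can be encoded exactly as mixed-integer linear constraints, so the solver can maximize the network's output over a constrained action space. One standard big-M encoding of a single ReLU neuron $y = \max(0, w^\top x + b)$ (an illustrative textbook formulation, not necessarily the paper's exact one) is:

```latex
\begin{aligned}
y &\ge w^\top x + b, \\
y &\le w^\top x + b + M(1 - z), \\
y &\le M z, \\
y &\ge 0, \quad z \in \{0, 1\},
\end{aligned}
```

where the binary variable $z$ indicates whether the neuron is active and $M$ is a sufficiently large constant bounding the pre-activation. Stacking these constraints layer by layer embeds the whole Q-network into the MILP, whose objective then maximizes the predicted long-term reward over feasible joint actions.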
📄 Abstract
Reinforcement learning (RL) has increasingly been applied to real-world planning problems, with progress in handling large state spaces and long time horizons. However, a key bottleneck in many domains is that RL methods cannot accommodate large, combinatorially structured action spaces. In such settings, even representing the set of feasible actions at a single step may require a complex discrete optimization formulation. We leverage recent advances in embedding trained neural networks into optimization problems to propose SEQUOIA, an RL algorithm that directly optimizes long-term reward over the feasible action space. Our approach embeds a Q-network into a mixed-integer program to select a combinatorial action at each timestep. Here, we focus on planning over restless bandits, a class of planning problems that captures many real-world examples of sequential decision making. We introduce coRMAB, a broader class of restless bandits whose combinatorial actions cannot be decoupled across the arms of the restless bandit, requiring direct optimization over the joint, exponentially large action space. We empirically validate SEQUOIA on four novel restless bandit problems with combinatorial constraints: multiple interventions, path constraints, bipartite matching, and capacity constraints. Our approach significantly outperforms existing methods, which cannot address sequential planning and combinatorial selection simultaneously, by an average of 26.4% on these difficult instances.
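To make the action-selection step concrete, here is a minimal toy sketch (all weights, the linear Q-function, and the 3-arm instance are made up for illustration). It picks the combinatorial action maximizing a Q-value under a capacity constraint by brute-force enumeration; the point of SEQUOIA is to replace exactly this exponential enumeration with a MILP that embeds the Q-network, so that the solver searches the joint action space implicitly.

```python
import itertools

# Toy "Q-network": a fixed bilinear Q(s, a) = s^T W a over 3 arms.
# Integer weights are arbitrary, chosen only for illustration.
W = [[ 5, -2,  1],
     [ 3,  4, -1],
     [-2,  1,  6]]

def q_value(state, action):
    """Predicted long-term reward of binary action vector `action` in `state`."""
    return sum(state[i] * W[i][j] * action[j]
               for i in range(3) for j in range(3))

def best_feasible_action(state, budget):
    """Argmax of Q over binary actions with a capacity constraint sum(a) <= budget.

    This brute force visits all 2^n actions; a MILP embedding of the
    Q-network avoids this explicit enumeration for large n.
    """
    best, best_q = None, float("-inf")
    for bits in itertools.product([0, 1], repeat=3):
        if sum(bits) <= budget:      # feasibility (capacity constraint)
            q = q_value(state, bits)
            if q > best_q:
                best, best_q = bits, q
    return best, best_q

# With 2 units of budget, acting on arms 0 and 2 is optimal here.
action, q = best_feasible_action([1, 1, 1], budget=2)
print(action, q)  # (1, 0, 1) 12
```

Even in this toy, the feasible set grows exponentially with the number of arms, which is why the paper formulates action selection as a discrete optimization problem rather than an enumeration.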