Latent Spherical Flow Policy for Reinforcement Learning with Combinatorial Actions

📅 2026-01-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of policy parameterization in combinatorial action spaces, where the exponential growth of the action set and complex feasibility constraints hinder effective learning. To overcome this, the authors propose the Latent Spherical Flow Policy (LSFlow), which learns a stochastic policy by matching spherical flows in a compact continuous latent space and maps each latent sample to a valid structured action via a combinatorial optimization solver. Integrating modern generative modeling into combinatorial reinforcement learning, LSFlow also introduces a latent-space value network and a smoothed Bellman operator to mitigate the discontinuities that the solver induces in the value function. Experimental results demonstrate that LSFlow outperforms state-of-the-art baselines by an average of 20.6% across multiple challenging tasks.
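The summary does not spell out the paper's smoothed Bellman operator. One standard way to smooth the hard, discontinuous `max` in a Bellman target is a log-sum-exp (softmax) relaxation with temperature `tau`; the sketch below illustrates that generic idea only, and the function name, signature, and temperature value are illustrative assumptions, not the paper's actual operator.

```python
import math

def smoothed_bellman_target(reward, gamma, q_next, tau=0.1):
    # Log-sum-exp relaxation of max_a' Q(s', a'): as tau -> 0 this recovers
    # the hard max, while tau > 0 yields a smooth, well-defined target.
    # Subtracting the max before exponentiating keeps it numerically stable.
    m = max(q_next)
    soft_max = m + tau * math.log(sum(math.exp((q - m) / tau) for q in q_next))
    return reward + gamma * soft_max

# Compare the hard and smoothed one-step targets on toy Q-values.
q_values = [2.0, 1.5, -0.3]
t_hard = 1.0 + 0.9 * max(q_values)
t_soft = smoothed_bellman_target(1.0, 0.9, q_values, tau=0.1)
```

At small `tau` the smoothed target sits just above the hard target (log-sum-exp upper-bounds the max), which is what makes it a usable stand-in for learning on a piecewise-constant value landscape.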

📝 Abstract
Reinforcement learning (RL) with combinatorial action spaces remains challenging because feasible action sets are exponentially large and governed by complex feasibility constraints, making direct policy parameterization impractical. Existing approaches embed task-specific value functions into constrained optimization programs or learn deterministic structured policies, sacrificing generality and policy expressiveness. We propose a solver-induced *latent spherical flow policy* that brings the expressiveness of modern generative policies to combinatorial RL while guaranteeing feasibility by design. Our method, LSFlow, learns a *stochastic* policy in a compact continuous latent space via spherical flow matching, and delegates feasibility to a combinatorial optimization solver that maps each latent sample to a valid structured action. To improve efficiency, we train the value network directly in the latent space, avoiding repeated solver calls during policy optimization. To address the piecewise-constant and discontinuous value landscape induced by solver-based action selection, we introduce a smoothed Bellman operator that yields stable, well-defined learning targets. Empirically, our approach outperforms state-of-the-art baselines by an average of 20.6% across a range of challenging combinatorial RL tasks.
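The abstract's sample-then-solve pipeline can be sketched in miniature: draw a latent point on the unit sphere (here uniformly, as a stand-in for the learned spherical flow) and let a combinatorial solver map it to a feasible action. The top-k "solver", dimensions, and constraint below are hypothetical illustrations of solver-induced feasibility, not the paper's actual tasks or solver.

```python
import math
import random

def sample_latent_sphere(dim, rng):
    # Stand-in for a spherical flow-matching sample: normalize a Gaussian
    # draw onto the unit sphere S^{dim-1}. LSFlow would instead sample from
    # a learned flow on the sphere.
    v = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def solver_top_k(z, k):
    # Toy combinatorial solver: among binary vectors with exactly k ones,
    # return the action maximizing <z, a>, i.e. select the k coordinates
    # with the largest latent scores. Feasibility (sum(a) == k) holds by
    # construction, mirroring the solver-induced feasibility guarantee.
    idx = sorted(range(len(z)), key=lambda i: z[i], reverse=True)[:k]
    a = [0] * len(z)
    for i in idx:
        a[i] = 1
    return a

rng = random.Random(0)
z = sample_latent_sphere(8, rng)
action = solver_top_k(z, k=3)
```

Because every latent sample is routed through the solver, every emitted action is feasible regardless of where the flow places its probability mass; this is also why the value landscape over the latent space becomes piecewise-constant, motivating the smoothed Bellman operator.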
Problem

Research questions and friction points this paper is trying to address.

combinatorial action spaces
reinforcement learning
feasibility constraints
policy parameterization
structured actions
Innovation

Methods, ideas, or system contributions that make the work stand out.

latent spherical flow
combinatorial reinforcement learning
stochastic policy
solver-induced feasibility
smoothed Bellman operator