🤖 AI Summary
This work addresses the challenges faced by existing generative-model-based offline reinforcement learning methods in handling discrete action spaces and multi-objective settings. It introduces the first extension of the flow matching framework to discrete actions by modeling policy generation as a continuous-time Markov chain trained with a Q-weighted flow matching objective. To mitigate the exponential growth of the joint action space in multi-agent settings, the authors propose a factorized conditional path. Theoretical analysis shows that, under idealized conditions, the method recovers the optimal policy. Empirical results demonstrate robustness across complex offline RL tasks involving high-dimensional control, multimodal decision-making, and dynamically changing preferences over multiple objectives, and further show that action quantization enables flexible application to continuous-control problems.
📝 Abstract
Generative policies based on diffusion models and flow matching have shown strong promise for offline reinforcement learning (RL), but their applicability remains largely confined to continuous action spaces. To address a broader range of offline RL settings, we extend flow matching to a general framework that supports discrete action spaces with multiple objectives. Specifically, we replace continuous flows with continuous-time Markov chains, trained using a Q-weighted flow matching objective. We then extend our design to multi-agent settings, mitigating the exponential growth of joint action spaces via a factorized conditional path. We theoretically show that, under idealized conditions, optimizing this objective recovers the optimal policy. Extensive experiments further demonstrate that our method performs robustly in practical scenarios, including high-dimensional control, multi-modal decision-making, and dynamically changing preferences over multiple objectives. Our discrete framework can also be applied to continuous-control problems through action quantization, providing a flexible trade-off between representational complexity and performance.
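Neither the summary nor the abstract spells out the training objective, but the two named ingredients (a continuous-time Markov chain over discrete actions and a Q-weighted flow matching loss) can be illustrated with a minimal sketch. This is a hypothetical toy, not the paper's implementation: the masking-style corruption process, the exponentiated batch-normalized weighting in `q_weights`, and all function names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ACTIONS = 4    # size of the discrete action vocabulary (toy value)
MASK = N_ACTIONS # extra "mask" state serving as the CTMC source distribution

def corrupt(actions, t, rng):
    """Masking-style corruption: each data action survives to time t with prob t.

    At t=0 every entry is masked (pure source); at t=1 the batch is clean data.
    """
    keep = rng.random(actions.shape) < t
    return np.where(keep, actions, MASK)

def q_weights(q, beta=1.0):
    """Exponentiated Q weights, normalized to mean 1 over the batch (an
    advantage-weighting-style choice, assumed here for illustration)."""
    w = np.exp(beta * (q - q.max()))
    return w / w.mean()

def q_weighted_fm_loss(logits, actions, x_t, q, beta=1.0):
    """Q-weighted discrete flow matching loss: cross-entropy between the
    model's prediction of the clean action and the data action, computed on
    corrupted (masked) positions and reweighted by Q."""
    logits = logits - logits.max(-1, keepdims=True)                 # stability
    logp = logits - np.log(np.exp(logits).sum(-1, keepdims=True))   # log-softmax
    nll = -logp[np.arange(len(actions)), actions]                   # per-sample CE
    masked = (x_t == MASK).astype(float)                            # corrupted only
    return (q_weights(q, beta) * masked * nll).mean()

# Toy batch: random actions, Q-values, and stand-in policy-network logits.
actions = rng.integers(0, N_ACTIONS, size=8)
q = rng.normal(size=8)
x_t = corrupt(actions, 0.3, rng)
logits = rng.normal(size=(8, N_ACTIONS))
loss = q_weighted_fm_loss(logits, actions, x_t, q)
```

High-Q transitions contribute more to the loss, so the learned denoiser is pulled toward high-value actions; in a multi-agent variant, one such factorized loss per agent (conditioned on shared state) would avoid enumerating the joint action space.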