🤖 AI Summary
Reinforcement learning in hybrid discrete-continuous action spaces remains constrained by limited policy expressiveness and poor scalability to high dimensions. This work addresses the challenge by formulating it as a fully cooperative game and introducing a collaborative diffusion policy framework. The approach employs two agents—one utilizing a discrete diffusion policy and the other a continuous diffusion policy—whose actions are coordinated through conditional dependency modeling and a sequential update mechanism to prevent policy conflicts. To enhance scalability, a Q-function-guided low-dimensional discrete action codebook is designed. Evaluated across multiple benchmark tasks with hybrid action spaces, the proposed method significantly outperforms existing state-of-the-art approaches, achieving up to a 19.3% improvement in success rate.
📝 Abstract
Hybrid action spaces, which combine discrete choices and continuous parameters, are prevalent in domains such as robot control and game AI. However, efficiently modeling and optimizing hybrid discrete-continuous action spaces remains a fundamental challenge, mainly due to limited policy expressiveness and poor scalability in high-dimensional settings. To address this challenge, we view the hybrid action space problem as a fully cooperative game and propose a **Cooperative Hybrid Diffusion Policies (CHDP)** framework to solve it. CHDP employs two cooperative agents that leverage a discrete and a continuous diffusion policy, respectively. The continuous policy is conditioned on the discrete action's representation, explicitly modeling the dependency between the two action components. This cooperative design allows the diffusion policies to leverage their expressiveness to capture complex distributions in their respective action spaces. To mitigate the update conflicts arising from simultaneous policy updates in this cooperative setting, we employ a sequential update scheme that fosters co-adaptation. Moreover, to improve scalability when learning in high-dimensional discrete action spaces, we construct a codebook that embeds the action space into a low-dimensional latent space. This mapping enables the discrete policy to learn in a compact, structured space. Finally, we design a Q-function-based guidance mechanism to align the codebook's embeddings with the discrete policy's representation during training. On challenging hybrid action benchmarks, CHDP outperforms the state-of-the-art method by up to 19.3% in success rate.
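To make the action-selection flow concrete, here is a minimal numpy sketch of the hybrid structure the abstract describes: a codebook embeds discrete actions into a low-dimensional latent space, a discrete policy picks a codebook entry, and a continuous policy generates parameters conditioned on that entry's embedding. All sizes, the greedy scoring rule, and the toy linear "denoiser" are illustrative assumptions, not CHDP's actual networks or diffusion schedules.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): 16 discrete actions,
# 4-dim codebook embeddings, 3-dim continuous parameters.
NUM_DISCRETE, EMBED_DIM, CONT_DIM, DENOISE_STEPS = 16, 4, 3, 8

# Codebook: embeds each discrete action into a low-dimensional latent space.
codebook = rng.normal(size=(NUM_DISCRETE, EMBED_DIM))

def discrete_policy(state):
    """Stand-in for the discrete diffusion policy: score each codebook
    entry against (part of) the state and pick the argmax (greedy sketch)."""
    scores = codebook @ state[:EMBED_DIM]
    return int(np.argmax(scores))

def continuous_policy(state, embedding):
    """Stand-in for the continuous diffusion policy: a toy reverse
    diffusion loop whose denoiser is a fixed linear map of the
    conditioning vector [state ; discrete embedding]."""
    cond = np.concatenate([state, embedding])
    W = rng.normal(size=(CONT_DIM, cond.size)) * 0.1  # untrained denoiser weights
    a = rng.normal(size=CONT_DIM)                     # start from Gaussian noise
    for t in range(DENOISE_STEPS, 0, -1):
        predicted_clean = W @ cond                    # denoiser's guess of the clean action
        a = a + (predicted_clean - a) / t             # step toward the guess
    return a

def hybrid_action(state):
    k = discrete_policy(state)                 # discrete choice first...
    a = continuous_policy(state, codebook[k])  # ...continuous params conditioned on its embedding
    return k, a

state = rng.normal(size=6)
k, a = hybrid_action(state)
print(k, a.shape)
```

The key structural point is the ordering: the continuous policy never acts independently of the discrete one, which is how the conditional dependency between the two action components is expressed.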