🤖 AI Summary
Traditional continuous deep reinforcement learning policies, such as deterministic or unimodal Gaussian actors, struggle to represent multimodal decision distributions, limiting robustness in diversity-critical tasks. This work shows that intractable multimodal policies can nevertheless be optimized end-to-end by policy gradient, providing the first theoretical derivation and practical implementation of reparameterization for such actors. Departing from density estimation, it introduces a sample-distance-based diversity regularization that jointly enhances expressivity, robustness, and computational efficiency. The framework unifies diffusion-based and amortized multimodal modeling paradigms. Empirically, it significantly improves few-shot generalization in multi-goal navigation and generative RL tasks and remains competitive on the MuJoCo benchmark, demonstrating that amortized policies can efficiently capture complex behavioral distributions.
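The reparameterization claim above can be illustrated with a minimal numerical sketch (not the paper's method): if an action is written as a deterministic function of the policy parameters and independent noise, the gradient of the expected critic value can flow through the sampled action itself. All names here (`mu`, `sigma`, the quadratic critic `Q`) are hypothetical choices for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D actor: a = mu + sigma * eps, with eps drawn
# independently of the parameters (the reparameterization trick).
# Hypothetical critic: Q(a) = -(a - target)^2.
mu, sigma, target = 0.5, 1.0, 2.0

eps = rng.standard_normal(100_000)       # noise, independent of mu
a = mu + sigma * eps                     # reparameterized action samples
# Chain rule through the sample: dQ/da * da/dmu, with da/dmu = 1.
grad_est = np.mean(-2.0 * (a - target))

# Analytic gradient of J(mu) = E[Q(a)] = -(mu - target)^2 - sigma^2.
grad_true = -2.0 * (mu - target)
print(grad_est, grad_true)
```

The Monte Carlo estimate converges to the analytic gradient, which is what lets an otherwise intractable actor be trained directly by policy gradient.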
📝 Abstract
Traditional continuous deep reinforcement learning (RL) algorithms employ deterministic or unimodal Gaussian actors, which cannot express complex multimodal decision distributions. This limitation can hinder their performance in diversity-critical scenarios. There have been some attempts to design online multimodal RL algorithms based on diffusion or amortized actors. However, these actors are intractable, so existing methods struggle to balance performance, decision diversity, and efficiency simultaneously. To overcome this challenge, we first reformulate existing intractable multimodal actors within a unified framework, and prove that they can be directly optimized by policy gradient via reparameterization. Then, we propose a distance-based diversity regularization that does not explicitly require decision probabilities. We identify two diversity-critical domains, namely multi-goal achieving and generative RL, to demonstrate the advantages of multimodal policies and our method, particularly in terms of few-shot robustness. On conventional MuJoCo benchmarks, our algorithm also shows competitive performance. Moreover, our experiments highlight that the amortized actor is a promising policy model class with strong multimodal expressivity and high performance. Our code is available at https://github.com/PneuC/DrAC
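The distance-based regularization can be sketched as follows: since the actor's densities are intractable, the bonus is computed from sampled actions alone, e.g. as a mean pairwise distance. This is a minimal illustration under assumed details (Euclidean distance, averaging over ordered pairs), not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def diversity_bonus(actions: np.ndarray) -> float:
    """Mean pairwise Euclidean distance among sampled actions.

    A sketch of a sample-distance-based diversity regularizer: it needs
    only samples from the actor, never their (intractable) densities.
    """
    diffs = actions[:, None, :] - actions[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    n = len(actions)
    return dists.sum() / (n * (n - 1))   # average over ordered pairs, self-pairs are zero

# A collapsed (unimodal) batch of actions scores lower than a spread-out
# (bimodal) one, so maximizing the bonus pushes against mode collapse.
collapsed = rng.normal(0.0, 0.01, size=(8, 2))
spread = np.concatenate([rng.normal(-1.0, 0.01, (4, 2)),
                         rng.normal(1.0, 0.01, (4, 2))])
print(diversity_bonus(collapsed), diversity_bonus(spread))
```

Because the bonus is a differentiable function of the sampled actions, it composes naturally with the reparameterized policy gradient described in the abstract.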