🤖 AI Summary
Traditional parameterized policies struggle to capture the multimodal solutions that arise in stochastic optimal control, while diffusion-based policies lack an explicit probability density, hindering policy-gradient optimization. This work proposes the first integration of polynomial energy-based models into policy optimization, leveraging moment-problem theory to construct a policy representation with an explicit, computable probability density. The resulting framework enables universal approximation of arbitrary distributions and exact maximum-entropy optimization. By modeling complex multimodal policies, the method captures non-convex manifold structures across multiple benchmark tasks and significantly outperforms existing baselines.
📝 Abstract
Stochastic Optimal Control provides a unified mathematical framework for solving complex decision-making problems, encompassing paradigms such as maximum-entropy reinforcement learning (RL) and imitation learning (IL). However, conventional parametric policies often struggle to represent the multimodality of the solutions. Although diffusion-based policies aim to recover this multimodality, they lack an explicit probability density, which complicates policy-gradient optimization. To bridge this gap, we propose MePoly, a novel policy parameterization based on polynomial energy-based models. MePoly provides an explicit, tractable probability density, enabling exact entropy maximization. Theoretically, we ground our method in the classical moment problem, leveraging its universal approximation capability for arbitrary distributions. Empirically, we demonstrate that MePoly effectively captures complex non-convex manifolds and outperforms baselines across diverse benchmarks.
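To make the core idea concrete, here is a minimal, hypothetical sketch (not the paper's implementation) of a one-dimensional polynomial energy-based density p(x) ∝ exp(-P(x)). Because the energy P is a polynomial, the unnormalized log-density is explicit; normalizing on a grid then makes both the density and its differential entropy directly computable, which is what enables exact entropy terms in the objective. The double-well polynomial, grid bounds, and function names below are illustrative assumptions.

```python
import numpy as np

def poly_energy_density(coeffs, xs):
    """Grid-normalized density p(x) ∝ exp(-P(x)).

    coeffs: coefficients of the polynomial energy P, highest degree first
    (an even leading degree with positive coefficient keeps p integrable).
    """
    P = np.polyval(coeffs, xs)
    logp = -P - (-P).max()            # shift log-density for numerical stability
    p = np.exp(logp)
    dx = xs[1] - xs[0]
    return p / (p.sum() * dx)         # normalize so sum(p) * dx == 1

def differential_entropy(p, xs):
    """-∫ p log p dx on the uniform grid; tractable because p is explicit."""
    dx = xs[1] - xs[0]
    mask = p > 0
    return -np.sum(p[mask] * np.log(p[mask])) * dx

# A double-well energy P(x) = x^4 - 4x^2 gives a bimodal density:
# exactly the multimodal shape a single Gaussian policy cannot represent.
xs = np.linspace(-4.0, 4.0, 4001)
p = poly_energy_density([1.0, 0.0, -4.0, 0.0, 0.0], xs)
print(round(float(np.sum(p) * (xs[1] - xs[0])), 6))   # 1.0 (normalized)
print(float(differential_entropy(p, xs)))              # explicit entropy value
```

In a policy-optimization setting, the polynomial coefficients would be the learnable parameters, and the entropy above could enter a maximum-entropy objective directly rather than via sampling-based estimates.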