🤖 AI Summary
This work addresses the performance limitations of model-based reinforcement learning caused by error accumulation in long-horizon predictions, which distorts value estimation. The authors introduce Hamiltonian Actor-Critic (HAC), the first method to integrate Pontryagin's Maximum Principle into an Actor-Critic framework, bypassing explicit value function learning by directly optimizing the Hamiltonian. This approach reduces sensitivity to dynamics model inaccuracies and combines deterministic dynamics modeling with multi-step policy optimization. Evaluated on both online and offline continuous control tasks, HAC consistently outperforms model-free and Model-Based Value Expansion (MVE) baselines, demonstrating superior sample efficiency, faster convergence, and enhanced out-of-distribution generalization.
📝 Abstract
Model-based reinforcement learning (MBRL) improves sample efficiency by leveraging learned dynamics models for policy optimization. However, the effectiveness of methods such as actor-critic is often limited by compounding model errors, which degrade long-horizon value estimation. Existing approaches, such as Model-Based Value Expansion (MVE), partially mitigate this issue through multi-step rollouts, but remain sensitive to rollout horizon selection and residual model bias. Motivated by the Pontryagin Maximum Principle (PMP), we propose Hamiltonian Actor-Critic (HAC), a model-based approach that eliminates explicit value function learning by directly optimizing a Hamiltonian defined over the learned dynamics and reward for deterministic systems. By avoiding value approximation, HAC reduces sensitivity to model errors while admitting convergence guarantees. Extensive experiments on continuous control benchmarks, in both online and offline RL settings, demonstrate that HAC outperforms model-free and MVE-based baselines in control performance, convergence speed, and robustness to distributional shift, including out-of-distribution (OOD) scenarios. In offline settings with limited data, HAC matches or exceeds state-of-the-art methods, highlighting its strong sample efficiency.
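For orientation, the Hamiltonian that HAC optimizes would, in the deterministic discrete-time setting the abstract describes, take the standard PMP form. This is a sketch using generic symbols not defined in the abstract ($s_t$ for state, $a_t$ for action, $f$ for the learned dynamics, $r$ for the reward, $\lambda_{t+1}$ for the costate, $\gamma$ for the discount), not necessarily the paper's exact formulation:

```latex
% Standard discrete-time PMP Hamiltonian (illustrative; symbols assumed)
H(s_t, a_t, \lambda_{t+1}) \;=\; r(s_t, a_t) \;+\; \gamma\, \lambda_{t+1}^{\top} f(s_t, a_t)
```

Under PMP, the costate satisfies the backward recursion $\lambda_t = \partial H / \partial s_t$, and the optimal action maximizes $H$ at each step, which is how direct Hamiltonian optimization can replace an explicitly learned value function.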