A Pontryagin Method of Model-based Reinforcement Learning via Hamiltonian Actor-Critic

📅 2026-03-30
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the performance limitations of model-based reinforcement learning caused by error accumulation in long-horizon predictions, which distorts value estimation. The authors introduce Hamiltonian Actor-Critic (HAC), the first method to integrate Pontryagin’s Maximum Principle into an Actor-Critic framework, bypassing explicit value function learning by directly optimizing the Hamiltonian. This approach reduces sensitivity to dynamics model inaccuracies and combines deterministic dynamics modeling with multi-step policy optimization. Evaluated on both online and offline continuous control tasks, HAC consistently outperforms model-free and Model Value Expansion (MVE) baselines, demonstrating superior sample efficiency, faster convergence, and enhanced out-of-distribution generalization.
πŸ“ Abstract
Model-based reinforcement learning (MBRL) improves sample efficiency by leveraging learned dynamics models for policy optimization. However, the effectiveness of methods such as actor-critic is often limited by compounding model errors, which degrade long-horizon value estimation. Existing approaches, such as Model-Based Value Expansion (MVE), partially mitigate this issue through multi-step rollouts, but remain sensitive to rollout horizon selection and residual model bias. Motivated by the Pontryagin Maximum Principle (PMP), we propose Hamiltonian Actor-Critic (HAC), a model-based approach that eliminates explicit value function learning by directly optimizing a Hamiltonian defined over the learned dynamics and reward for deterministic systems. By avoiding value approximation, HAC reduces sensitivity to model errors while admitting convergence guarantees. Extensive experiments on continuous control benchmarks, in both online and offline RL settings, demonstrate that HAC outperforms model-free and MVE-based baselines in control performance, convergence speed, and robustness to distributional shift, including out-of-distribution (OOD) scenarios. In offline settings with limited data, HAC matches or exceeds state-of-the-art methods, highlighting its strong sample efficiency.
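For reference, the Hamiltonian the abstract alludes to is the standard Pontryagin one for a deterministic system $\dot{x} = f(x, u)$ with reward $r$; the symbols below are the usual PMP notation, not necessarily the paper's own:

$$
H(x, u, \lambda) = r(x, u) + \lambda^{\top} f(x, u),
\qquad
\dot{\lambda} = -\frac{\partial H}{\partial x},
\qquad
u^{*} = \arg\max_{u}\, H(x, u, \lambda).
$$

Directly ascending $H$ in $u$ uses only the learned dynamics $f$ and reward $r$, which is how a method in this family can sidestep an explicitly learned value function.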
Problem

Research questions and friction points this paper is trying to address.

model-based reinforcement learning
compounding model errors
long-horizon value estimation
rollout horizon selection
model bias
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hamiltonian Actor-Critic
Pontryagin Maximum Principle
Model-based Reinforcement Learning
Value Function Elimination
Sample Efficiency
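As a loose illustration of the "Value Function Elimination" idea above, the sketch below improves an action by gradient ascent on a Pontryagin-style Hamiltonian built from a dynamics model and reward. The dynamics `f`, reward `r`, and costate value `lam` are toy inventions for illustration only, not the paper's actual HAC implementation.

```python
# Hypothetical sketch: improve an action by gradient ascent on a
# Pontryagin-style Hamiltonian H = r(x, u) + lam * f(x, u), rather than
# bootstrapping from a learned value function. All functions and constants
# here are toy placeholders, not the paper's HAC method.

def f(x, u):
    """Toy deterministic (learned) dynamics model."""
    return 0.9 * x + u

def r(x, u):
    """Toy reward: penalize state magnitude and control effort."""
    return -(x ** 2) - 0.1 * (u ** 2)

def hamiltonian(x, u, lam):
    """Pontryagin Hamiltonian for reward maximization."""
    return r(x, u) + lam * f(x, u)

def improve_action(x, lam, u0=0.0, lr=0.1, steps=500, eps=1e-5):
    """Gradient ascent on H over u, via central finite differences."""
    u = u0
    for _ in range(steps):
        grad = (hamiltonian(x, u + eps, lam)
                - hamiltonian(x, u - eps, lam)) / (2 * eps)
        u += lr * grad
    return u

# For this toy H, dH/du = -0.2*u + lam vanishes at u* = 5*lam,
# so with lam = 0.2 the ascent should approach u* = 1.0.
u_star = improve_action(x=1.0, lam=0.2)
```

Note that the update touches only the model and reward; no critic network appears, which is the sensitivity-to-model-error trade the abstract describes.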
Chengyang Gu
Information Hub, HKUST (Guangzhou), Guangzhou, China
Yuxin Pan
Department of Computer Science, City University of Hong Kong, Hong Kong, China
Hui Xiong
Senior Scientist, Candela Corporation
Ultrafast dynamics · atomic molecular physics · free electron laser
Yize Chen
Assistant Professor, University of Alberta
Machine Learning · Power Systems · Optimization · Control