🤖 AI Summary
This work addresses the challenge of achieving global stabilization for underactuated systems—such as the Pendubot and Acrobot—through a model-based reinforcement learning (MBRL) approach. The method extends the MC-PILCO algorithm to two-link underactuated systems for the first time, integrating Gaussian process dynamics modeling, Monte Carlo policy optimization, and Bayesian uncertainty quantification. It achieves high-fidelity model learning and end-to-end global control policy optimization using minimal interaction data—approximately 200 seconds per task—without requiring piecewise controllers or manual energy shaping. The approach demonstrates both efficacy and robustness in simulation and on real hardware platforms. It has won consecutive editions of the AI Olympics RealAIGym competition, most recently at ICRA 2025, achieving over a tenfold improvement in sample efficiency compared to state-of-the-art model-free methods.
📝 Abstract
This short paper describes our proposed solution for the third edition of the "AI Olympics with RealAIGym" competition, held at ICRA 2025. We employed Monte-Carlo Probabilistic Inference for Learning Control (MC-PILCO), an MBRL algorithm recognized for its exceptional data efficiency across various low-dimensional robotic tasks, including cart-pole, ball & plate, and Furuta pendulum systems. MC-PILCO optimizes a system dynamics model using interaction data, enabling policy refinement through simulation rather than optimization on data collected directly from the system. This approach has proven highly effective on physical systems, offering greater data efficiency than Model-Free (MF) alternatives. Notably, MC-PILCO won the first two editions of this competition, demonstrating its robustness in both simulated and real-world environments. Besides briefly reviewing the algorithm, we discuss the most critical aspects of the MC-PILCO implementation for the tasks at hand: learning a global policy for the Pendubot and Acrobot systems.