Efficient Model-Based Reinforcement Learning for Robot Control via Online Learning

📅 2025-10-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address low sample efficiency and significant sim-to-real gaps for complex robots operating in real-world environments, this paper proposes an online model-based reinforcement learning framework. The method dynamically constructs a dynamics model from real-time interaction data and integrates model predictive control with stochastic optimization for online policy adaptation. We provide theoretical guarantees showing a sublinear regret bound, ensuring continuous performance improvement and inherent adaptability to environmental dynamics. Experiments on a hydraulic excavator arm and a soft robotic manipulator demonstrate that the approach achieves performance comparable to state-of-the-art model-free methods within only a few hours of real-world interaction—substantially reducing sample requirements—while exhibiting strong robustness against abrupt disturbances such as payload changes. The core contribution is the first incorporation of rigorous online learning theory into a model-based RL architecture, uniquely balancing high sample efficiency, minimal reliance on simulation, and real-time adaptability.
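For context, the sublinear regret guarantee mentioned above can be stated in the standard online-learning form (the notation here is generic, not taken from the paper):

```latex
R_T \;=\; \sum_{t=1}^{T} \ell_t(\pi_t) \;-\; \min_{\pi \in \Pi} \sum_{t=1}^{T} \ell_t(\pi),
\qquad \frac{R_T}{T} \;\xrightarrow{\;T \to \infty\;}\; 0,
```

where $\ell_t$ is the control cost incurred at round $t$ and $\pi_t$ is the policy played at that round. Sublinearity (e.g., $R_T = O(\sqrt{T})$ under common convexity assumptions) means the average cost of the learned policies approaches that of the best fixed policy in hindsight, which is what "continuous performance improvement" refers to.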

📝 Abstract
We present an online model-based reinforcement learning algorithm suitable for controlling complex robotic systems directly in the real world. Unlike prevailing sim-to-real pipelines that rely on extensive offline simulation and model-free policy optimization, our method builds a dynamics model from real-time interaction data and performs policy updates guided by the learned model. This efficient model-based scheme sharply reduces the number of samples needed to train control policies, enabling direct training on real-world rollout data, which in turn limits the influence of simulation bias and facilitates the search for high-performance control policies. We adopt online learning analysis to derive sublinear regret bounds under standard stochastic online optimization assumptions, providing formal guarantees that performance improves as more interaction data are collected. Experimental evaluations on a hydraulic excavator arm and a soft robot arm show strong sample efficiency compared to model-free reinforcement learning methods, with comparable performance reached within hours of real-world interaction. The algorithm also adapted robustly to shifting dynamics when the payload condition was randomized. Our approach paves the way toward efficient and reliable on-robot learning for a broad class of challenging control tasks.
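To make the training loop concrete, the following is a minimal sketch of the general pattern the abstract describes: fit a dynamics model to real interaction data online, and choose actions by model predictive control with stochastic (sampling-based) optimization. Everything here (the 1-D toy plant, the linear model class, random-shooting MPC) is an illustrative assumption, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D toy plant standing in for the real robot; the paper's
# systems (hydraulic excavator arm, soft robot arm) are far more complex.
def true_dynamics(x, u):
    return 0.9 * x + 0.5 * u

def fit_linear_model(X, U, X_next):
    """Least-squares fit of x' ~ a*x + b*u from logged interaction data."""
    A = np.column_stack([X, U])
    coef, *_ = np.linalg.lstsq(A, X_next, rcond=None)
    return coef  # (a, b)

def mpc_random_shooting(model, x0, target, horizon=5, n_samples=256):
    """Stochastic optimization over action sequences: sample candidates,
    roll them out through the learned model, keep the best first action."""
    a, b = model
    U = rng.uniform(-1.0, 1.0, size=(n_samples, horizon))
    cost = np.zeros(n_samples)
    x = np.full(n_samples, x0)
    for t in range(horizon):
        x = a * x + b * U[:, t]          # rollout under the learned model
        cost += (x - target) ** 2        # tracking cost
    return U[np.argmin(cost), 0]

def run_online_mbrl(target=1.0, steps=60):
    X, U, Xn = [], [], []
    x = 0.0
    model = (0.0, 1.0)  # crude prior before any data is collected
    for _ in range(steps):
        u = mpc_random_shooting(model, x, target)
        x_next = true_dynamics(x, u)     # real-world interaction step
        X.append(x); U.append(u); Xn.append(x_next)
        if len(X) >= 2:                  # online update: refit on all data
            model = fit_linear_model(np.array(X), np.array(U), np.array(Xn))
        x = x_next
    return x, model

final_x, model = run_online_mbrl()
```

The same skeleton carries over to the real setting by swapping the linear model for a learned (e.g., neural) dynamics model and random shooting for a stronger stochastic optimizer; the essential property is that the model, and hence the policy, improves with every real rollout.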
Problem

Research questions and friction points this paper is trying to address.

Develops online model-based reinforcement learning for real-world robot control
Reduces sample requirements by learning dynamics from real-time interaction data
Enables robust adaptation to changing dynamics without extensive simulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online model-based reinforcement learning for robot control
Builds dynamics model from real-time interaction data
Enables direct training on real-world rollout data
Fang Nan — Robotic Systems Lab, ETH Zürich, Zürich, Switzerland
Hao Ma — Learning and Dynamical Systems, Max Planck Institute for Intelligent Systems, Tübingen, Germany
Qinghua Guan — EPFL (Computational design, Soft actuators & sensors, 3D printing, Soft/smart materials, Soft robotics)
Josie Hughes — CREATE Lab, EPFL, Lausanne, Switzerland
Michael Muehlebach — Max Planck Institute for Intelligent Systems (Machine learning, Optimization, Dynamical systems)
Marco Hutter — Professor of Robotics, ETH Zurich (Legged robotics, Robotics, Control)