🤖 AI Summary
Learning controllers directly on robots faces two challenges: low sample efficiency and high inference latency, which hinders real-time operation at the control frequencies the hardware requires. To address this, we propose a general-purpose real-time adaptation framework and a lightweight Real-Time Hierarchical Control Policy (RT-HCP) algorithm. RT-HCP integrates model-based reinforcement learning, action-sequence prediction, and online policy fine-tuning to preserve control continuity while drastically reducing inference overhead. To our knowledge, this is the first work to achieve efficient kHz-frequency online learning on a physical Furuta pendulum platform; it improves sample efficiency by over an order of magnitude compared to state-of-the-art RL methods while keeping inference latency consistently below 1 ms. Our approach achieves a strong trade-off among high-frequency real-time performance, sample efficiency, and deployment robustness, establishing a scalable paradigm for end-to-end real-time robotic control.
📝 Abstract
Learning a controller directly on the robot requires extreme sample efficiency. Model-based reinforcement learning (RL) methods are the most sample-efficient, but they often suffer from inference times too long to meet the control-frequency requirements of the robot. In this paper, we address the sample-efficiency and inference-time challenges with two contributions. First, we define a general framework for dealing with inference delays, in which the slow-inference robot controller provides a sequence of actions to feed the control-hungry robotic platform without execution gaps. Then, we compare several RL algorithms in light of this framework and propose RT-HCP, an algorithm that offers an excellent trade-off between performance, sample efficiency and inference time. We validate the superiority of RT-HCP with experiments in which we learn a controller directly on a simple but high-frequency FURUTA pendulum platform. Code: github.com/elasriz/RTHCP
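The core idea of the delay framework, as described in the abstract, is that each (slow) inference call returns a *sequence* of actions, and a new inference is launched early enough that the buffer never runs dry at the control rate. The sketch below illustrates this scheduling with a tick-based toy model; it is not the authors' RT-HCP implementation, and all names (`ActionSequenceController`, `inference_ticks`) and the simulated-delay model are illustrative assumptions.

```python
import collections


class ActionSequenceController:
    """Feed a high-frequency control loop from a slow sequence-predicting policy.

    `policy(obs)` returns a list of actions (one per control tick).
    `inference_ticks` is the number of control ticks one inference call takes;
    a new inference is started while enough buffered actions remain to cover
    that delay, so execution never stalls (requires horizon > inference_ticks).
    """

    def __init__(self, policy, inference_ticks, init_obs):
        self.policy = policy
        self.inference_ticks = inference_ticks
        self.buffer = collections.deque(policy(init_obs))  # warm start
        self.pending = None  # [ticks_remaining, obs snapshot] while inferring

    def step(self, obs):
        # Launch a new (simulated) inference once the buffered actions
        # only just cover the inference delay.
        if self.pending is None and len(self.buffer) <= self.inference_ticks:
            self.pending = [self.inference_ticks, obs]
        if self.pending is not None:
            self.pending[0] -= 1
            if self.pending[0] <= 0:  # inference "finished" this tick
                self.buffer.extend(self.policy(self.pending[1]))
                self.pending = None
        if not self.buffer:
            raise RuntimeError("execution gap: action buffer ran dry")
        return self.buffer.popleft()


# Toy policy: every inference returns a 4-action sequence.
ctrl = ActionSequenceController(lambda obs: [0, 1, 2, 3],
                                inference_ticks=2, init_obs=0)
actions = [ctrl.step(t) for t in range(12)]  # 12 ticks, no gap raised
```

In a real deployment the inference would run in a separate thread or process and the buffer hand-off would need to be lock-protected, but the scheduling constraint is the same: the predicted horizon must exceed the worst-case inference delay measured in control ticks.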