RT-HCP: Dealing with Inference Delays and Sample Efficiency to Learn Directly on Robotic Platforms

📅 2025-09-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Robotics faces two challenges in learning controllers directly on hardware: low sample efficiency and high inference latency, which hinder real-time operation at the control frequencies the hardware requires. To address this, we propose a general-purpose real-time adaptation framework and a lightweight Real-Time Hierarchical Control Policy (RT-HCP) algorithm. RT-HCP integrates model-based reinforcement learning, action-sequence prediction, and online policy fine-tuning to preserve control continuity while drastically reducing inference overhead. To our knowledge, this is the first work to achieve efficient kHz-frequency online learning on a physical Furuta pendulum platform; it improves sample efficiency by over an order of magnitude compared to state-of-the-art RL methods while keeping inference latency consistently below 1 ms. Our approach achieves a strong trade-off among high-frequency real-time performance, sample efficiency, and deployment robustness, establishing a scalable new paradigm for end-to-end real-time robotic control.

📝 Abstract
Learning a controller directly on the robot requires extreme sample efficiency. Model-based reinforcement learning (RL) methods are the most sample efficient, but their inference time is often too long to meet the robot's control frequency requirements. In this paper, we address the sample efficiency and inference time challenges with two contributions. First, we define a general framework for dealing with inference delays, in which the slow-inference robot controller provides a sequence of actions to feed the control-hungry robotic platform without execution gaps. Then, we compare several RL algorithms in light of this framework and propose RT-HCP, an algorithm that offers an excellent trade-off between performance, sample efficiency, and inference time. We validate the superiority of RT-HCP with experiments in which we learn a controller directly on a simple but high-frequency Furuta pendulum platform. Code: github.com/elasriz/RTHCP
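The core mechanism of the framework described above (buffering an action sequence so the robot never idles while the slow controller is thinking) can be sketched in a few lines. The sketch below is a simplified illustration, not the paper's implementation: `slow_policy`, the tick-based timing model, and all parameter names are hypothetical stand-ins. The key property it demonstrates is that as long as the predicted horizon covers the inference delay, the platform experiences no execution gaps.

```python
from collections import deque

def slow_policy(obs, horizon):
    # Hypothetical stand-in for the learned controller: given an
    # observation, it returns a sequence of `horizon` future actions.
    return [obs + i for i in range(horizon)]

def run_control_loop(n_ticks, inference_delay, horizon):
    """Simulate a high-frequency control loop fed by a slow policy.

    An inference launched at tick t delivers its action sequence at
    tick t + inference_delay; meanwhile the robot keeps consuming
    previously buffered actions. No execution gap occurs as long as
    horizon >= inference_delay.
    """
    buffer = deque(slow_policy(0, horizon))  # warm start before the loop
    pending = None   # (ready_tick, actions) of the in-flight inference
    executed, gaps = [], 0
    for t in range(n_ticks):
        if pending is None:
            # Launch a new inference on the latest observation.
            pending = (t + inference_delay, slow_policy(t, horizon))
        ready_tick, actions = pending
        if t >= ready_tick:
            # Inference result arrives: replace the action buffer.
            buffer = deque(actions)
            pending = None
        if buffer:
            executed.append(buffer.popleft())  # act at every tick
        else:
            gaps += 1  # robot starved for actions
    return executed, gaps
```

With a horizon that covers the delay (e.g. `run_control_loop(50, inference_delay=3, horizon=5)`), the gap count is zero; shrinking the horizon below the delay starves the platform, which is exactly the failure mode the framework is designed to avoid.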
Problem

Research questions and friction points this paper is trying to address.

Addressing sample efficiency in robot controller learning
Reducing inference delays for high-frequency robotic control
Developing real-time model-based reinforcement learning algorithms
Innovation

Methods, ideas, or system contributions that make the work stand out.

General framework for handling inference delays
RT-HCP algorithm balancing performance and efficiency
Action sequence feeding to prevent execution gaps