🤖 AI Summary
Existing robotic teleoperation systems predominantly support only unidirectional control, preventing real-time synchronization between the robot's state and the operator's hardware and thus limiting effective human intervention and autonomous learning. This paper proposes a bidirectional real-time teleoperation system that operates in a "co-pilot" mode to enable natural human involvement: (1) a novel automotive-inspired steering-wheel-style interface for intuitive human-robot collaboration; (2) joint-level bidirectional real-time force/position mapping and synchronization on low-cost hardware (3D-printed structures plus commercial low-power motors); and (3) a ROS-based architecture enabling seamless human-in-the-loop integration and closed-loop generation of high-fidelity action-correction datasets. Experiments demonstrate substantial improvements in imitation-learning data efficiency and recovery capability: fewer demonstration samples required, higher task success rates in both imitation learning (IL) and reinforcement learning (RL), and stable human-in-the-loop RL training.
📝 Abstract
Teleoperation is essential for autonomous robot learning, especially in manipulation tasks that require human demonstrations or corrections. However, most existing systems only offer unilateral robot control and lack the ability to synchronize the robot's state with the teleoperation hardware, preventing real-time, flexible intervention. In this work, we introduce HACTS (Human-As-Copilot Teleoperation System), a novel system that establishes bilateral, real-time joint synchronization between a robot arm and teleoperation hardware. This simple yet effective feedback mechanism, akin to a steering wheel in autonomous vehicles, enables the human copilot to intervene seamlessly while collecting action-correction data for future learning. Implemented using 3D-printed components and low-cost, off-the-shelf motors, HACTS is both accessible and scalable. Our experiments show that HACTS significantly enhances performance in imitation learning (IL) and reinforcement learning (RL) tasks, boosting IL recovery capabilities and data efficiency, and facilitating human-in-the-loop RL. HACTS paves the way for more effective and interactive human-robot collaboration and data collection, advancing the capabilities of robot manipulation.
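The core mechanism, bilateral joint-level synchronization, can be sketched as a symmetric position-coupling loop: each side is torqued toward the other's joint angle, so a human pushing on either device is felt on both. The sketch below is illustrative only (all names and gains are assumptions, not the paper's implementation); a real system would exchange these signals with motor drivers over ROS topics rather than simulate them in-process.

```python
# Minimal sketch (not the HACTS implementation) of bilateral joint
# synchronization between a teleoperation device ("leader") and a
# robot arm joint ("follower"). Gains and dynamics are illustrative.

from dataclasses import dataclass

@dataclass
class Joint:
    pos: float = 0.0      # current angle (rad)
    torque: float = 0.0   # commanded torque (arbitrary units)

def bilateral_step(leader: Joint, follower: Joint,
                   kp: float = 5.0, dt: float = 0.01) -> None:
    """One control tick of symmetric position coupling.

    Each side receives a torque proportional to the other's position
    error, so the pair converges to a shared angle and an external
    force applied to either device is reflected to the other.
    """
    err = leader.pos - follower.pos
    follower.torque = kp * err    # follower pulled toward leader
    leader.torque = -kp * err     # leader feels the follower's lag
    # crude first-order integration standing in for motor dynamics
    follower.pos += follower.torque * dt
    leader.pos += leader.torque * dt

leader, follower = Joint(pos=1.0), Joint(pos=0.0)
for _ in range(500):
    bilateral_step(leader, follower)
# both joints settle near a common angle between the start positions
```

Because the coupling is symmetric, the sum of the two angles is conserved and the pair converges to the midpoint; in a human-as-copilot setting, the human's hand on the leader simply becomes another external torque in this loop.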