🤖 AI Summary
Existing teleoperation systems for mobile manipulators rely on specialized hardware, incur high costs, and suffer from the morphological mismatch between human operators and robotic platforms.
Method: This paper proposes a whole-body teleoperation framework that requires no additional hardware, built on a “human-robot division of labor” paradigm: the operator directly controls the manipulator’s end-effector pose via standard interfaces such as joysticks or hand guidance, while a previously trained reinforcement learning agent autonomously governs mobile base motion; kinematic mapping and generic input interfaces remove anthropomorphic constraints.
Contribution/Results: We introduce a lightweight human-robot collaborative teleoperation architecture that frees the operator from a tracked workspace, allowing them to move with the robot over spatially extended tasks. Skills learned from the collected data generalize to novel obstacles and object poses from as few as five demonstrations. The framework significantly reduces task completion time across multiple robots and tasks and enables high-fidelity imitation learning dataset collection.
📝 Abstract
Demonstration data plays a key role in learning complex behaviors and training robotic foundation models. While effective control interfaces exist for static manipulators, data collection remains cumbersome and time-intensive for mobile manipulators due to their large number of degrees of freedom. Although specialized hardware, avatars, or motion tracking can enable whole-body control, these approaches are either expensive, robot-specific, or suffer from the embodiment mismatch between robot and human demonstrator. In this work, we present MoMa-Teleop, a novel teleoperation method that infers end-effector motions from existing interfaces and delegates the base motions to a previously developed reinforcement learning agent, leaving the operator to focus fully on the task-relevant end-effector motions. This enables whole-body teleoperation of mobile manipulators with no additional hardware or setup costs via standard interfaces such as joysticks or hand guidance. Moreover, the operator is not bound to a tracked workspace and can move freely with the robot over spatially extended tasks. We demonstrate that our approach results in a significant reduction in task completion time across a variety of robots and tasks. As the generated data covers diverse whole-body motions without embodiment mismatch, it enables efficient imitation learning. By focusing on task-specific end-effector motions, our approach learns skills that transfer to unseen settings, such as new obstacles or changed object positions, from as little as five demonstrations. We make code and videos available at https://moma-teleop.cs.uni-freiburg.de.
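The division of labor described above can be sketched as a simple control loop: the operator's interface supplies the end-effector command, and a learned policy independently chooses the base motion. The following is a minimal illustrative sketch, not the paper's implementation; `wholebody_step`, `toy_base_policy`, and the observation layout are hypothetical names chosen for this example.

```python
import numpy as np

def wholebody_step(ee_command, robot_state, base_policy):
    """Combine an operator end-effector command with an autonomous base action.

    ee_command:  desired end-effector velocity (6-D twist) from the operator's
                 interface (e.g. a joystick or hand guidance)
    robot_state: dict of whatever additional observations the base policy uses
    base_policy: callable mapping an observation dict to a base velocity command
    """
    # The operator's input passes through unchanged as the end-effector target:
    # the human stays in full control of the task-relevant motion.
    ee_twist = np.asarray(ee_command, dtype=float)

    # The learned agent picks the base motion (e.g. to keep the arm in reach
    # and avoid obstacles); the operator never steers the base directly.
    base_vel = base_policy({"ee_twist": ee_twist, **robot_state})

    return {"ee_twist": ee_twist, "base_vel": base_vel}


# Toy stand-in policy: drive the base along the commanded planar direction.
def toy_base_policy(obs):
    return 0.5 * obs["ee_twist"][:2]  # (vx, vy) for the base

cmd = wholebody_step([0.2, 0.0, 0.0, 0.0, 0.0, 0.0], {}, toy_base_policy)
```

In the actual system the base policy is a pre-trained RL agent rather than the toy heuristic shown here, but the interface is the same: one operator command in, one whole-body command out per control step.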