The Role of Embodiment in Intuitive Whole-Body Teleoperation for Mobile Manipulation

📅 2025-09-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the dual challenges of low-quality demonstration data and high operator cognitive load in teleoperation of mobile manipulation robots. We propose an embodied teleoperation framework featuring coupled control of the robotic arm and mobile base. Through systematic comparison of coupled versus decoupled control paradigms under two visual feedback modalities—virtual reality (VR) and conventional screen-based interfaces—we find that while VR enhances immersion, it significantly increases task completion time and subjective cognitive workload. In contrast, coupled control substantially improves both the quality of imitation-learning demonstrations and downstream policy performance—without elevating operator burden. To our knowledge, this work is the first to empirically quantify the long-term cognitive cost of VR feedback in whole-body coordinated teleoperation tasks and to demonstrate the human-factor advantages of coupled control for acquiring high-dimensional, kinematically coherent demonstration data. Our findings provide a validated interaction paradigm and empirical foundation for learning-oriented teleoperation interface design.

📝 Abstract
Intuitive teleoperation interfaces are essential for mobile manipulation robots to ensure high-quality data collection while reducing operator workload. A strong sense of embodiment combined with minimal physical and cognitive demands not only enhances the user experience during large-scale data collection, but also helps maintain data quality over extended periods. This becomes especially crucial for challenging long-horizon mobile manipulation tasks that require whole-body coordination. We compare two distinct robot control paradigms: a coupled embodiment integrating arm manipulation and base navigation functions, and a decoupled embodiment treating these systems as separate control entities. Additionally, we evaluate two visual feedback mechanisms: immersive virtual reality and conventional screen-based visualization of the robot's field of view. These configurations were systematically assessed across a complex, multi-stage task sequence requiring integrated planning and execution. Our results show that the use of VR as a feedback modality increases task completion time, cognitive workload, and perceived effort of the teleoperator. Coupling manipulation and navigation imposes a workload on the user comparable to decoupling the embodiments, while preliminary experiments suggest that data acquired through coupled teleoperation leads to better imitation learning performance. Our holistic view of intuitive teleoperation interfaces provides valuable insight into collecting high-quality, high-dimensional mobile manipulation data at scale with the human operator in mind. Project website: https://sophiamoyen.github.io/role-embodiment-wbc-moma-teleop/
Problem

Research questions and friction points this paper is trying to address.

Evaluating intuitive teleoperation interfaces for mobile manipulation robots
Comparing coupled versus decoupled control paradigms for whole-body coordination
Assessing VR versus screen-based visual feedback for operator performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

VR feedback increases task completion time, cognitive workload, and perceived operator effort
Coupled manipulation and navigation imposes a user workload comparable to decoupled control
Data collected via coupled teleoperation improves downstream imitation learning performance
Sophia Bianchi Moyen
Computer Science Department, TU Darmstadt, Germany; University of São Paulo, Brazil
Rickmer Krohn
Computer Science Department, TU Darmstadt, Germany
Sophie Lueth
PhD student @ PEARL lab, TU Darmstadt
Mobile Manipulation · Robot Learning · HRI
Kay Pompetzki
Computer Science Department, TU Darmstadt, Germany
Jan Peters
Computer Science Department, TU Darmstadt, Germany; Centre for Cognitive Science, TU Darmstadt, Germany; Systems AI for Robot Learning, German Research Center for AI (DFKI)
Vignesh Prasad
TU Darmstadt
Robot Learning · Computer Vision · Bimanual Manipulation · Human Robot Interaction
Georgia Chalvatzaki
Professor for Interactive Robot Perception and Learning, Technische Universität Darmstadt
Robotics · Machine Learning · Reinforcement Learning · Robot Perception · HRI