MoTo: A Zero-shot Plug-in Interaction-aware Navigation for General Mobile Manipulation

📅 2025-09-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Mobile manipulation robots face two key bottlenecks: poor generalization, since conventional methods depend on large-scale task-specific training that is hard to collect, and the limited mobility of fixed-base models, which hinders adaptation to dynamic environments. To address both, this paper proposes MoTo, a zero-shot, plug-and-play mobile manipulation framework that requires no additional training data. It extends any off-the-shelf fixed-base manipulation model into a mobile manipulation agent via interaction keypoint modeling and multi-view consistent vision-language reasoning, and introduces a base-arm coordinated trajectory optimization objective to ensure operational feasibility. Evaluated on the OVMM benchmark and in real-world scenarios, the method improves task success rate by 2.68% and 16.67% absolute, respectively, demonstrating strong generalization, practicality, and deployment efficiency.
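As a rough formalization of that coordinated objective (the notation below is assumed for illustration; the paper's exact equation is not reproduced here), the docking pose and arm trajectory are chosen to close the gap between the two interaction keypoints while keeping the trajectory feasible:

```latex
% Assumed notation, not the paper's own equation:
%   q_b : base docking pose,  \tau : arm trajectory,
%   k_o : interaction keypoint on the target object,
%   k_a(q_b, \tau) : arm keypoint induced by base pose and trajectory.
\min_{q_b,\,\tau} \; \bigl\lVert k_a(q_b, \tau) - k_o \bigr\rVert_2
\quad \text{s.t.} \quad \tau \in \mathcal{T}_{\mathrm{feasible}}(q_b)
```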

📝 Abstract
Mobile manipulation stands as a core challenge in robotics, enabling robots to assist humans across varied tasks and dynamic daily environments. Conventional mobile manipulation approaches often struggle to generalize across different tasks and environments due to the lack of large-scale training. Recent advances in manipulation foundation models demonstrate impressive generalization on a wide range of fixed-base manipulation tasks, yet they remain confined to fixed settings. We therefore devise a plug-in module named MoTo that can be combined with any off-the-shelf manipulation foundation model to empower it with mobile manipulation ability. Specifically, we propose an interaction-aware navigation policy that generates robot docking points for generalized mobile manipulation. To enable zero-shot operation, we propose an interaction keypoint framework that uses vision-language models (VLMs) under multi-view consistency to localize keypoints on both the target object and the robotic arm according to the instruction, so that fixed-base manipulation foundation models can be employed. We further propose motion planning objectives for the mobile base and robot arm that minimize the distance between the two keypoints while maintaining the physical feasibility of trajectories. In this way, MoTo guides the robot to docking points where fixed-base manipulation can be successfully performed, and leverages VLM generation and trajectory optimization to achieve mobile manipulation in a zero-shot manner, without any mobile manipulation expert data. Extensive experiments on OVMM and in the real world demonstrate that MoTo achieves success rates 2.68% and 16.67% higher, respectively, than state-of-the-art mobile manipulation methods, without requiring additional training data.
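To make the multi-view keypoint stage and the docking-point search concrete, here is a minimal sketch in plain numpy. Everything in it is a hypothetical stand-in, not the paper's implementation: `query_vlm_keypoint` fakes a VLM call that would really prompt a model and back-project its 2D prediction, and the consistency tolerance and reach radius are invented parameters.

```python
# Minimal sketch of MoTo-style multi-view keypoint agreement and docking
# point selection. All functions and thresholds here are illustrative
# assumptions, not the paper's actual implementation.
import numpy as np

def query_vlm_keypoint(view_image, instruction):
    """Hypothetical VLM call: returns a 3D keypoint candidate for one view.
    A real system would prompt a VLM with the image + instruction and
    back-project the predicted 2D point using depth and camera pose."""
    rng = np.random.default_rng(hash((str(view_image), instruction)) % 2**32)
    return np.array([1.0, 0.5, 0.8]) + 0.01 * rng.standard_normal(3)

def multi_view_consistent_keypoint(views, instruction, tol=0.05):
    """Keep candidates that agree across views (within `tol` metres)
    and average them; reject the query if the views disagree."""
    candidates = np.stack([query_vlm_keypoint(v, instruction) for v in views])
    center = np.median(candidates, axis=0)
    inliers = candidates[np.linalg.norm(candidates - center, axis=1) < tol]
    if len(inliers) < 2:
        raise ValueError("views disagree; keypoint rejected")
    return inliers.mean(axis=0)

def select_docking_point(k_obj, base_candidates, reach=0.85):
    """Pick the base pose whose distance to the object keypoint best
    matches the arm's comfortable reach (a stand-in feasibility check)."""
    dists = np.linalg.norm(base_candidates[:, :2] - k_obj[:2], axis=1)
    return base_candidates[np.argmin(np.abs(dists - reach))]

if __name__ == "__main__":
    views = [f"view_{i}" for i in range(4)]  # placeholder images
    k_obj = multi_view_consistent_keypoint(views, "pick up the mug")
    candidates = np.array([[x, y, 0.0] for x in np.linspace(0, 2, 9)
                           for y in np.linspace(-1, 1, 9)])
    dock = select_docking_point(k_obj, candidates)
    print("object keypoint:", k_obj, "docking pose:", dock)
```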
Problem

Research questions and friction points this paper is trying to address.

Enabling zero-shot mobile manipulation without expert data
Generalizing manipulation across tasks and environments
Generating interaction-aware navigation for robot docking
Innovation

Methods, ideas, or system contributions that make the work stand out.

Plug-in module for mobile manipulation enhancement (see the composition sketch after this list)
Interaction-aware navigation policy for docking points
Vision-language models for zero-shot keypoint generation
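The plug-in idea in the first bullet amounts to composing a navigation policy with an unmodified fixed-base policy. The sketch below illustrates that composition pattern only; the class and method names are assumptions for this sketch, not the paper's API.

```python
# Illustrative composition pattern for a MoTo-style plug-in. The names
# here are assumptions, not the paper's API; the point is that the
# fixed-base model is reused unmodified.
class NavigationPolicy:
    """Drives the base to a docking point near the object keypoint."""
    def dock(self, object_keypoint):
        # Stand-in for interaction-aware navigation + trajectory
        # optimization: park the base a fixed offset from the keypoint.
        x, y, _ = object_keypoint
        return (x - 0.8, y)

class FixedBaseManipulator:
    """Placeholder for any off-the-shelf fixed-base foundation model."""
    def execute(self, instruction):
        print(f"manipulating: {instruction}")

class MobileManipulator:
    """Zero-shot composition: navigate first, then run the frozen model."""
    def __init__(self, nav, arm):
        self.nav, self.arm = nav, arm

    def run(self, instruction, object_keypoint):
        pose = self.nav.dock(object_keypoint)
        print(f"docked at {pose}")
        self.arm.execute(instruction)

MobileManipulator(NavigationPolicy(), FixedBaseManipulator()).run(
    "pick up the mug", (1.0, 0.5, 0.8))
```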
Zhenyu Wu
School of IEA, Beijing University of Posts and Telecommunications
Angyuan Ma
Department of Automation, Tsinghua University
Xiuwei Xu
Tsinghua University
computer vision, embodied AI
Hang Yin
Department of Automation, Tsinghua University
Yinan Liang
Department of Automation, Tsinghua University
Ziwei Wang
School of Electrical and Electronic Engineering, Nanyang Technological University
Jiwen Lu
Department of Automation, Tsinghua University
Haibin Yan
Beijing University of Posts and Telecommunications
Computer Vision, Pattern Recognition, Robotics