MoManipVLA: Transferring Vision-language-action Models for General Mobile Manipulation

📅 2025-03-17
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limited generalizability of fixed-base vision-language-action (VLA) models when transferred to mobile manipulation. MoManipVLA uses a pre-trained VLA model to predict end-effector waypoints with strong generalization, then applies a bi-level optimization: the upper level plans mobile-base waypoints to enlarge the manipulator's feasible workspace, while the lower level selects a physically feasible end-effector trajectory that completes the task. By jointly planning base and manipulator motion, MoManipVLA makes waypoints predicted by fixed-base VLA models feasible on mobile platforms in a zero-shot manner. On the OVMM benchmark and in real-robot experiments, MoManipVLA achieves a 4.2% higher task success rate than the state of the art, and real-world deployment requires only 50 training iterations thanks to the strong generalization of the pre-trained VLA model.

πŸ“ Abstract
Mobile manipulation is a fundamental challenge for robots assisting humans with diverse tasks in everyday environments. However, conventional mobile manipulation approaches often struggle to generalize across different tasks and environments because of the lack of large-scale training. In contrast, recent advances in vision-language-action (VLA) models have shown impressive generalization capabilities, but these foundation models are developed for fixed-base manipulation tasks. Therefore, we propose an efficient policy adaptation framework named MoManipVLA that transfers pre-trained VLA models for fixed-base manipulation to mobile manipulation, so that high generalization across tasks and environments can be achieved in the mobile manipulation policy. Specifically, we utilize pre-trained VLA models to generate end-effector waypoints with high generalization ability. We design motion planning objectives for the mobile base and the robot arm that maximize the physical feasibility of the trajectory. Finally, we present an efficient bi-level objective optimization framework for trajectory generation, where the upper-level optimization predicts waypoints for base movement to enlarge the manipulator's policy space, and the lower-level optimization selects the optimal end-effector trajectory to complete the manipulation task. In this way, MoManipVLA can adjust the position of the robot base in a zero-shot manner, making the waypoints predicted by fixed-base VLA models feasible. Extensive experimental results on OVMM and in the real world demonstrate that MoManipVLA achieves a 4.2% higher success rate than state-of-the-art mobile manipulation methods, and requires only 50 training iterations for real-world deployment due to the strong generalization ability of the pre-trained VLA models.
Problem

Research questions and friction points this paper is trying to address.

Transferring vision-language-action models to mobile manipulation tasks.
Enhancing generalization across diverse tasks and environments.
Optimizing robot base and arm motion for feasible trajectories.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transfers pre-trained VLA models to mobile manipulation
Designs motion planning for base and arm feasibility
Implements bi-level optimization for trajectory generation
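The bi-level optimization above can be sketched as follows. This is a minimal illustration, not the paper's method: the 2D reach-radius workspace model and the helpers `feasibility_cost` and `plan_base_pose` are hypothetical stand-ins for the actual motion planning objectives.

```python
import numpy as np

def feasibility_cost(base_pose, waypoints, arm_reach=0.8):
    """Lower-level objective (illustrative): penalize VLA-predicted
    end-effector waypoints that fall outside the arm's reach radius
    of the mobile base (planar simplification)."""
    dists = np.linalg.norm(waypoints[:, :2] - base_pose[:2], axis=1)
    return float(np.sum(np.maximum(dists - arm_reach, 0.0)))

def plan_base_pose(waypoints, candidates, arm_reach=0.8):
    """Upper level (illustrative): choose the candidate base pose whose
    workspace makes the fixed-base VLA waypoints most feasible."""
    costs = [feasibility_cost(c, waypoints, arm_reach) for c in candidates]
    return candidates[int(np.argmin(costs))]

# Toy usage: waypoints clustered near (1, 0) favor the base pose at (1, 0).
waypoints = np.array([[1.0, 0.0, 0.5], [1.2, 0.1, 0.5]])
candidates = [np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])]
best = plan_base_pose(waypoints, candidates)
```

The key design point mirrored here is that the base pose is chosen to maximize the physical feasibility of trajectories the VLA model predicts, rather than re-training the VLA model itself.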
Zhenyu Wu
Beijing University of Posts and Telecommunications
Yuheng Zhou
Nanyang Technological University
Xiuwei Xu
Tsinghua University
computer vision, embodied AI
Ziwei Wang
Nanyang Technological University
Haibin Yan
Beijing University of Posts and Telecommunications
Computer Vision, Pattern Recognition, Robotics