🤖 AI Summary
This work addresses the limited generalizability of fixed-base vision-language-action (VLA) models to mobile manipulation tasks. To enable efficient zero-shot transfer to mobile platforms, we propose MoManipVLA, a novel framework featuring a two-tier collaborative optimization architecture: an upper tier leverages a pre-trained VLA model to generate semantically consistent base poses, while a lower tier incorporates kinematic constraints to optimize end-effector trajectories. By jointly planning base and manipulator goals, MoManipVLA overcomes the migration bottleneck that arises when fixed-base VLA models are deployed directly in mobile settings. Evaluated on the OVMM benchmark in both simulation and real-robot experiments, MoManipVLA achieves a 4.2% improvement in task success rate over the previous state of the art. Real-world deployment requires only 50 training iterations, substantially reducing deployment overhead. Comprehensive ablations demonstrate strong cross-task and cross-environment generalization.
📄 Abstract
Mobile manipulation is a fundamental challenge for robots assisting humans with diverse tasks and environments in everyday life. However, conventional mobile manipulation approaches often struggle to generalize across different tasks and environments because of the lack of large-scale training data. In contrast, recent vision-language-action (VLA) models have shown impressive generalization capabilities, but these foundation models are developed for fixed-base manipulation tasks. We therefore propose an efficient policy adaptation framework named MoManipVLA that transfers pre-trained VLA models for fixed-base manipulation to mobile manipulation, so that the resulting mobile manipulation policy inherits their high generalization ability across tasks and environments. Specifically, we utilize pre-trained VLA models to generate end-effector waypoints with high generalization ability, and we design motion planning objectives for the mobile base and the robot arm that maximize the physical feasibility of the trajectory. Finally, we present an efficient bi-level objective optimization framework for trajectory generation, where the upper-level optimization predicts waypoints for base movement to enlarge the manipulator's policy space, and the lower-level optimization selects the optimal end-effector trajectory to complete the manipulation task. In this way, MoManipVLA can adjust the position of the robot base in a zero-shot manner, making the waypoints predicted by the fixed-base VLA model physically feasible. Extensive experimental results on OVMM and in the real world demonstrate that MoManipVLA achieves a 4.2% higher success rate than state-of-the-art mobile manipulation methods, and requires only 50 training iterations for real-world deployment thanks to the strong generalization ability of the pre-trained VLA models.
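The bi-level structure described above can be sketched in a few lines. The following is a minimal illustrative sketch, not the paper's actual method: the feasibility score, arm-reach constant, and discretised candidate base poses are all hypothetical stand-ins for the paper's motion planning objectives and optimizer. The upper level picks a base pose for each VLA-predicted end-effector waypoint; the lower level scores how physically feasible that waypoint is from a given base.

```python
import math

# Hypothetical reachability constraint: the arm can comfortably reach
# waypoints within this radius of the base (value is illustrative).
ARM_REACH = 0.8  # metres, assumed

def feasibility(base_pose, waypoint):
    """Lower-level objective (toy): higher is better.

    Penalises end-effector waypoints whose horizontal distance from the
    base exceeds the assumed arm reach; stands in for the paper's
    physical-feasibility objectives.
    """
    dist = math.dist(base_pose, waypoint[:2])
    return -max(0.0, dist - ARM_REACH)

def bilevel_plan(waypoints, candidate_bases):
    """Upper level: choose a base pose per waypoint.

    For each end-effector waypoint produced by a (pretend) fixed-base
    VLA model, select the candidate base pose that maximises the
    lower-level feasibility score, so the waypoint becomes reachable.
    """
    plan = []
    for wp in waypoints:
        best_base = max(candidate_bases, key=lambda b: feasibility(b, wp))
        plan.append((best_base, wp))
    return plan

# Toy example: two end-effector waypoints (x, y, z) and two candidate
# base (x, y) poses; the second waypoint is only reachable from the
# second base, so the planner moves the base before manipulating.
waypoints = [(0.5, 0.2, 0.9), (2.0, 0.1, 0.7)]
candidate_bases = [(0.0, 0.0), (1.5, 0.0)]
plan = bilevel_plan(waypoints, candidate_bases)
```

In this toy setup the first waypoint is served from the first base pose and the second from the second, mirroring how the framework repositions the base in a zero-shot manner so that fixed-base VLA waypoints stay feasible.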