MergeVLA: Cross-Skill Model Merging Toward a Generalist Vision-Language-Action Agent

📅 2025-11-24
🤖 AI Summary
VLA models suffer significant performance degradation when merging cross-skill capabilities, primarily due to inter-task parameter interference and strong inter-layer dependencies in action modules. To address this, we propose MergeVLA: (1) a sparse-activation LoRA adapter with task-specific masks to enforce parameter consistency across tasks; (2) a decoupled, pure cross-attention action module that eliminates inter-layer dependencies; and (3) an unsupervised test-time task routing mechanism enabling plug-and-play multi-skill composition. Built upon vision-language foundation models, MergeVLA supports joint multi-task training without additional annotations. Evaluated on LIBERO, LIBERO-Plus, RoboTwin, and a real SO101 robotic arm, MergeVLA matches or surpasses single-task expert models—achieving, for the first time, a VLA agent that is highly generalizable, mergeable, and fine-tuning-free.

📝 Abstract
Recent Vision-Language-Action (VLA) models reformulate vision-language models by tuning them with millions of robotic demonstrations. While they perform well when fine-tuned for a single embodiment or task family, extending them to multi-skill settings remains challenging: directly merging VLA experts trained on different tasks results in near-zero success rates. This raises a fundamental question: what prevents VLAs from mastering multiple skills within one model? With an empirical decomposition of learnable parameters during VLA fine-tuning, we identify two key sources of non-mergeability: (1) Finetuning drives LoRA adapters in the VLM backbone toward divergent, task-specific directions beyond the capacity of existing merging methods to unify. (2) Action experts develop inter-block dependencies through self-attention feedback, causing task information to spread across layers and preventing modular recombination. To address these challenges, we present MergeVLA, a merging-oriented VLA architecture that preserves mergeability by design. MergeVLA introduces sparsely activated LoRA adapters via task masks to retain consistent parameters and reduce irreconcilable conflicts in the VLM. Its action expert replaces self-attention with cross-attention-only blocks to keep specialization localized and composable. When the task is unknown, it uses a test-time task router to adaptively select the appropriate task mask and expert head from the initial observation, enabling unsupervised task inference. Across LIBERO, LIBERO-Plus, RoboTwin, and multi-task experiments on the real SO101 robotic arm, MergeVLA achieves performance comparable to or even exceeding individually finetuned experts, demonstrating robust generalization across tasks, embodiments, and environments.
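The abstract's first fix, sparsely activated LoRA adapters gated by task masks, can be illustrated with a minimal numpy sketch. This is a hypothetical toy (the dimensions, disjoint row masks, and plain summation are assumptions, not the paper's exact scheme): each task's LoRA delta is restricted to a task-specific sparse set of rows, so merging by summation produces no inter-task parameter conflicts.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n_tasks = 8, 2, 3  # hypothetical weight dim, LoRA rank, task count

# Each task may only update a sparse, task-specific subset of rows (its mask).
# Disjoint masks are an assumption here, chosen to make non-interference exact.
masks = [np.zeros(d, dtype=bool) for _ in range(n_tasks)]
for t, m in enumerate(masks):
    m[t * 2:(t + 1) * 2] = True

deltas = []
for t in range(n_tasks):
    A = rng.standard_normal((r, d))  # LoRA down-projection
    B = rng.standard_normal((d, r))  # LoRA up-projection
    delta = (B @ A) * masks[t][:, None]  # zero out rows outside the task mask
    deltas.append(delta)

# Merging is a plain sum: with disjoint masks, no parameters collide.
merged = sum(deltas)

# Each task's update survives the merge exactly.
for t in range(n_tasks):
    assert np.allclose(merged * masks[t][:, None], deltas[t])
print("merged delta shape:", merged.shape)
```

When masks overlap, the conflicting rows would interfere, which is exactly the failure mode the paper attributes to naively merged dense LoRA adapters.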
Problem

Research questions and friction points this paper is trying to address.

Directly merging VLA experts trained on different robotic tasks yields near-zero success rates
Fine-tuned LoRA adapters drift in divergent, task-specific directions beyond what existing merging methods can unify
Action experts develop inter-block dependencies through self-attention feedback, preventing modular recombination
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparsely activated LoRA adapters via task masks
Cross-attention-only blocks for action expert
Test-time task router for unsupervised task inference
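The third contribution, an unsupervised test-time task router, can be sketched as nearest-centroid routing over an embedding of the initial observation. This is a hedged illustration: the stored per-skill centroids, the cosine-similarity rule, and all dimensions below are assumptions for the sketch, not the paper's specified router.

```python
import numpy as np

rng = np.random.default_rng(1)
emb_dim, n_tasks = 16, 3  # hypothetical embedding size and skill count

# Assumed: one reference embedding (centroid) per skill, e.g. averaged from
# that skill's training observations.
centroids = rng.standard_normal((n_tasks, emb_dim))

def route(obs_embedding: np.ndarray) -> int:
    """Select the task mask / expert head whose centroid is most
    cosine-similar to the first observation's embedding."""
    sims = centroids @ obs_embedding
    sims /= np.linalg.norm(centroids, axis=1) * np.linalg.norm(obs_embedding)
    return int(np.argmax(sims))

# A noisy observation near skill 2's centroid routes to expert head 2.
obs = centroids[2] + 0.1 * rng.standard_normal(emb_dim)
task_id = route(obs)
print("selected expert head:", task_id)
```

Because routing needs only the initial observation and no labels, this matches the paper's claim of plug-and-play, fine-tuning-free multi-skill composition at test time.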
👥 Authors
Yuxia Fu (UQMM Lab, The University of Queensland)
Zhizhen Zhang (UQMM Lab, The University of Queensland)
Yuqi Zhang (UQMM Lab, The University of Queensland)
Zijian Wang (UQMM Lab, The University of Queensland)
Zi Huang (PhD Candidate; Deep Learning)
Yadan Luo (ARC DECRA and Senior Lecturer, University of Queensland; Generalization, 3D Vision, Autonomous Driving)