🤖 AI Summary
To address the challenge of multi-task generalization for quadrupedal robots, this work proposes a unified Vision–Language–Action (VLA) framework. Methodologically: (1) we design a sparsely activated "Mixture of Robotic Experts" (MoRE) architecture, integrating low-rank adaptation (LoRA) modules into a multimodal large language model; (2) we introduce an end-to-end reinforcement learning paradigm that trains the VLA model as a Q-function, enabling deep coupling between the model and the task structure; and (3) we perform efficient knowledge distillation and fine-tuning on automatically collected, mixed-quality embodied data. Experiments demonstrate that our approach consistently outperforms baselines across six embodied locomotion and manipulation skills, achieves significant improvements in out-of-distribution generalization, and deploys successfully on real-world quadrupedal robots, validating its robustness, practicality, and scalability.
📝 Abstract
Developing versatile quadruped robots that can smoothly perform diverse actions and tasks in real-world environments remains a significant challenge. This paper introduces Mixture of Robotic Experts (MoRE), a novel vision-language-action (VLA) model for quadruped robots that uses reinforcement learning (RL) to fine-tune large-scale VLA models on large amounts of mixed-quality data. MoRE integrates multiple low-rank adaptation (LoRA) modules as distinct experts within a dense multi-modal large language model (MLLM), forming a sparsely activated mixture-of-experts model. This design enables the model to adapt effectively to a wide array of downstream tasks. Moreover, after deeply exploring the structural properties of our tasks, we employ an RL-based training objective that trains our model as a Q-function. Learning effectively from automatically collected mixed-quality data improves both data efficiency and model performance. Extensive experiments demonstrate that MoRE outperforms all baselines across six different skills and exhibits superior generalization in out-of-distribution scenarios. We further validate our method in real-world settings, confirming the practicality of our approach and laying a solid foundation for future research on multi-task learning in quadruped robots.
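The abstract does not include implementation details, but the core architectural idea of sparsely activated LoRA experts inside a frozen backbone can be illustrated in a few lines. The sketch below is an assumption-laden toy, not the paper's implementation: all dimensions, the top-k router, and the zero-initialized `B` matrices are standard LoRA/MoE conventions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n_experts, top_k = 16, 4, 4, 2  # hidden dim, LoRA rank, #experts, active experts

# Frozen base projection standing in for one layer of the backbone MLLM.
W_base = rng.standard_normal((d, d)) / np.sqrt(d)

# Each expert is a low-rank adapter: delta_W = B @ A, with rank r << d.
# B starts at zero, the usual LoRA convention, so untrained adapters are a no-op.
experts = [
    {"A": rng.standard_normal((r, d)) / np.sqrt(d), "B": np.zeros((d, r))}
    for _ in range(n_experts)
]
W_gate = rng.standard_normal((d, n_experts)) / np.sqrt(d)  # router weights

def moe_lora_forward(x):
    """Route x to the top-k LoRA experts and add their low-rank updates."""
    logits = x @ W_gate
    top = np.argsort(logits)[-top_k:]          # indices of the selected experts
    gates = np.exp(logits[top] - logits[top].max())
    gates /= gates.sum()                       # softmax over selected experts only
    out = W_base @ x                           # frozen backbone path
    for g, i in zip(gates, top):
        e = experts[i]
        out = out + g * (e["B"] @ (e["A"] @ x))  # sparse LoRA contribution
    return out

x = rng.standard_normal(d)
y = moe_lora_forward(x)
# With every B still zero, the mixture output equals the frozen base output.
assert np.allclose(y, W_base @ x)
```

Only the selected experts' adapters contribute per token, which is what keeps the mixture sparse: capacity grows with the number of experts while per-forward compute stays close to a single LoRA pass.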