🤖 AI Summary
This work addresses policy interference and negative transfer in multi-task dual-arm manipulation caused by distribution shifts across tasks. To mitigate these issues, the authors propose MoE-ACT, a lightweight multi-task imitation learning framework that integrates sparse Mixture-of-Experts (MoE) modules into the Transformer encoder of the Action Chunking Transformer (ACT). By incorporating FiLM modulation and a multi-scale cross-attention mechanism, MoE-ACT enables language-conditioned disentanglement of task-specific policies, adaptive expert activation, and semantic alignment. Extensive experiments on both simulated and real-world dual-arm platforms show that MoE-ACT improves average task success rate by 33% over the ACT baseline, substantially enhancing multi-task generalization and robustness.
📝 Abstract
The ability of robots to handle multiple tasks under a unified policy is critical for deploying embodied intelligence in real-world household and industrial applications. However, out-of-distribution variation across tasks often causes severe task interference and negative transfer when training general robotic policies. To address this challenge, we propose a lightweight multi-task imitation learning framework for bimanual manipulation, termed Mixture-of-Experts-Enhanced Action Chunking Transformer (MoE-ACT), which integrates sparse Mixture-of-Experts (MoE) modules into the Transformer encoder of ACT. Each MoE layer decomposes the unified task policy into independently invoked expert components; through adaptive expert activation, it naturally decouples multi-task action distributions in the latent space. During decoding, Feature-wise Linear Modulation (FiLM) dynamically modulates action tokens to keep action generation consistent with the task instruction. In parallel, multi-scale cross-attention lets the policy attend to both low-level and high-level semantic features, providing rich visual information for robotic manipulation. We further incorporate textual input, turning the framework from a purely vision-based model into a vision-centric, language-conditioned action generation system. Experiments in both simulation and on a real-world dual-arm setup show that MoE-ACT substantially improves multi-task performance, outperforming vanilla ACT by an average of 33% in success rate. These results indicate that MoE-ACT provides stronger robustness and generalization in complex multi-task bimanual manipulation environments. Our open-source project page can be found at https://j3k7.github.io/MoE-ACT/.
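To make the architectural ingredients above concrete, the following is a minimal PyTorch-style sketch of a sparsely gated top-k MoE feed-forward block and a FiLM conditioning layer of the kind the abstract describes being inserted into the ACT Transformer. All module names, hyperparameters, and the routing scheme (`SparseMoEFFN`, `FiLMLayer`, `top_k = 2`, etc.) are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch only: sparse top-k MoE feed-forward block + FiLM layer.
# Names, dimensions, and routing details are assumptions, not MoE-ACT's code.
import torch
import torch.nn as nn


class SparseMoEFFN(nn.Module):
    """Replaces a Transformer FFN with a sparsely gated mixture of experts."""

    def __init__(self, d_model: int, d_ff: int, num_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(d_model, num_experts)  # token-wise router
        self.experts = nn.ModuleList(
            [
                nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
                for _ in range(num_experts)
            ]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, seq, d_model)
        logits = self.gate(x)                             # (B, S, num_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)    # keep top-k experts per token
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[..., k] == e                    # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[..., k][mask].unsqueeze(-1) * expert(x[mask])
        return out


class FiLMLayer(nn.Module):
    """Feature-wise Linear Modulation: scale and shift tokens with a language embedding."""

    def __init__(self, d_model: int, d_cond: int):
        super().__init__()
        self.to_gamma_beta = nn.Linear(d_cond, 2 * d_model)

    def forward(self, tokens: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        gamma, beta = self.to_gamma_beta(cond).chunk(2, dim=-1)  # each (B, d_model)
        return gamma.unsqueeze(1) * tokens + beta.unsqueeze(1)   # broadcast over sequence


if __name__ == "__main__":
    x = torch.randn(2, 50, 256)    # 2 sequences of 50 tokens, d_model = 256
    lang = torch.randn(2, 512)     # e.g. a pooled text-encoder embedding
    moe = SparseMoEFFN(d_model=256, d_ff=1024)
    film = FiLMLayer(d_model=256, d_cond=512)
    y = film(moe(x), lang)
    print(y.shape)                 # torch.Size([2, 50, 256])
```

With `top_k = 2`, only two of the experts run per token, which keeps the added compute small while still letting different tasks route to different expert combinations; practical sparse-MoE layers usually also add an auxiliary load-balancing loss so that routing does not collapse onto a single expert.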