MoE-ACT: Scaling Multi-Task Bimanual Manipulation with Sparse Language-Conditioned Mixture-of-Experts Transformers

📅 2026-03-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges of policy interference and negative transfer in multi-task dual-arm manipulation arising from task distribution shifts. To mitigate these issues, the authors propose MoE-ACT, a lightweight multi-task imitation learning framework that integrates a sparse mixture-of-experts (MoE) module into the Transformer encoder of Action Chunking Transformers (ACT). By incorporating FiLM modulation and a multi-scale cross-attention mechanism, MoE-ACT enables language-conditioned disentanglement of task-specific policies, adaptive expert activation, and semantic alignment. Extensive experiments on both simulated and real-world dual-arm platforms demonstrate that MoE-ACT achieves a 33% average improvement in task success rate over the baseline ACT, significantly enhancing multi-task generalization and robustness.

📝 Abstract
The ability of robots to handle multiple tasks under a unified policy is critical for deploying embodied intelligence in real-world household and industrial applications. However, out-of-distribution variation across tasks often causes severe task interference and negative transfer when training general robotic policies. To address this challenge, we propose a lightweight multi-task imitation learning framework for bimanual manipulation, termed Mixture-of-Experts-Enhanced Action Chunking Transformer (MoE-ACT), which integrates sparse Mixture-of-Experts (MoE) modules into the Transformer encoder of ACT. The MoE layer decomposes a unified task policy into independently invoked expert components. Through adaptive activation, it naturally decouples multi-task action distributions in latent space. During decoding, Feature-wise Linear Modulation (FiLM) dynamically modulates action tokens to improve consistency between action generation and task instructions. In parallel, multi-scale cross-attention enables the policy to simultaneously focus on both low-level and high-level semantic features, providing rich visual information for robotic manipulation. We further incorporate textual information, transitioning the framework from a purely vision-based model to a vision-centric, language-conditioned action generation system. Experimental validation in both simulation and a real-world dual-arm setup shows that MoE-ACT substantially improves multi-task performance. Specifically, MoE-ACT outperforms vanilla ACT by an average of 33% in success rate. These results indicate that MoE-ACT provides stronger robustness and generalization in complex multi-task bimanual manipulation environments. Our open-source project page can be found at https://j3k7.github.io/MoE-ACT/.
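The abstract describes three mechanisms: a sparse MoE layer that routes tokens to a few adaptively activated experts, language-conditioned routing, and FiLM modulation that scales and shifts features from a language embedding. The toy sketch below illustrates how those pieces compose; it is not the authors' implementation, and all dimensions, the router design, and the expert MLPs are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- the paper does not publish these exact sizes.
d_model, d_lang, n_experts, top_k = 8, 4, 4, 2

# Each "expert" is a tiny feed-forward map, standing in for the expert FFNs
# inside the Transformer encoder layer.
expert_weights = [rng.standard_normal((d_model, d_model)) * 0.1
                  for _ in range(n_experts)]

# Router: scores experts from token features concatenated with the
# language embedding (language-conditioned expert activation).
W_router = rng.standard_normal((d_model + d_lang, n_experts)) * 0.1

# FiLM generator: per-channel scale (gamma) and shift (beta) from language.
W_film = rng.standard_normal((d_lang, 2 * d_model)) * 0.1

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def moe_film_layer(tokens, lang_emb):
    """Sparse top-k MoE feed-forward with FiLM conditioning.

    tokens: (n_tokens, d_model); lang_emb: (d_lang,).
    Returns language-modulated outputs with the same shape as `tokens`.
    """
    n_tokens = tokens.shape[0]
    # 1) Route each token using both its features and the instruction.
    router_in = np.concatenate(
        [tokens, np.tile(lang_emb, (n_tokens, 1))], axis=-1)
    logits = router_in @ W_router                        # (n_tokens, n_experts)
    # 2) Keep only the top-k experts per token (sparse activation).
    topk_idx = np.argsort(logits, axis=-1)[:, -top_k:]   # (n_tokens, top_k)
    out = np.zeros_like(tokens)
    for t in range(n_tokens):
        sel = topk_idx[t]
        gates = softmax(logits[t, sel])                  # renormalize over top-k
        for g, e in zip(gates, sel):
            out[t] += g * np.tanh(tokens[t] @ expert_weights[e])
    # 3) FiLM: language-conditioned per-channel scale and shift.
    gamma_beta = lang_emb @ W_film
    gamma, beta = gamma_beta[:d_model], gamma_beta[d_model:]
    return (1.0 + gamma) * out + beta

tokens = rng.standard_normal((3, d_model))
lang_emb = rng.standard_normal(d_lang)
y = moe_film_layer(tokens, lang_emb)
print(y.shape)  # (3, 8)
```

Because only `top_k` of the `n_experts` expert FFNs run per token, different tasks can occupy different experts, which is the intuition behind the decoupling of multi-task action distributions described above.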
Problem

Research questions and friction points this paper is trying to address.

multi-task manipulation
task interference
negative transfer
bimanual manipulation
embodied intelligence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixture-of-Experts
Language-Conditioned Imitation Learning
Bimanual Manipulation
Action Chunking Transformer
Multi-task Policy
Kangjun Guo
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou 511453, China
Haichao Liu
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou 511453, China
Yanji Sun
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou 511453, China
Ruhan Zhao
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou 511453, China
Jinni Zhou
HKUST(GZ), HKUST
Jun Ma
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou 511453, China, and also with The Hong Kong University of Science and Technology, Hong Kong SAR, China