🤖 AI Summary
To address weak cross-domain generalization and the lack of explicit visual-action alignment in trajectory prediction, this paper proposes Tra-MoE. Methodologically, it introduces: (i) a sparsely-gated Mixture-of-Experts (MoE) architecture with Top-1 gating that balances domain-specific parameter specialization with cross-domain collaboration; (ii) an adaptive policy conditioning mechanism that learns 2D mask representations for predicted trajectories, explicitly aligning them with visual observations to guide action prediction; and (iii) joint cross-domain pretraining on broad out-of-domain data. Evaluated on both simulation and real-robot tasks, Tra-MoE achieves substantial gains over dense baselines at comparable parameter count while keeping FLOPs per token constant, demonstrating improved generalization and scalability for instruction-driven, fine-grained robotic policy learning.
Abstract
Learning from multiple domains is a primary factor that influences the generalization of a single unified robot system. In this paper, we aim to learn a trajectory prediction model using broad out-of-domain data to improve its performance and generalization ability. The trajectory model is designed to predict any-point trajectories in the current frame given an instruction, and can provide detailed control guidance for robotic policy learning. To handle the diverse out-of-domain data distribution, we propose a sparsely-gated MoE (**Top-1** gating strategy) architecture for the trajectory model, coined **Tra-MoE**. The sparse activation design strikes a good balance between parameter cooperation and specialization, effectively benefiting from large-scale out-of-domain data while maintaining constant FLOPs per token. In addition, we introduce an adaptive policy conditioning technique that learns 2D mask representations for predicted trajectories, which are explicitly aligned with image observations to guide action prediction more flexibly. We perform extensive experiments in both simulation and real-world scenarios to verify the effectiveness of Tra-MoE and the adaptive policy conditioning technique. We also conduct a comprehensive empirical study on training Tra-MoE, demonstrating that it consistently outperforms the dense baseline model, even when the latter is scaled to match Tra-MoE's parameter count.
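The key property claimed above is that Top-1 gating activates exactly one expert per token, so compute per token stays constant no matter how many experts are added. A minimal sketch of this routing idea (my own illustrative NumPy code, not the paper's implementation; all names and sizes are hypothetical):

```python
import numpy as np

# Hypothetical minimal sketch of Top-1 sparse-gated MoE routing.
# Each token is sent to exactly one expert, so per-token FLOPs are
# constant regardless of the total number of experts.

rng = np.random.default_rng(0)

d_model, n_experts, n_tokens = 8, 4, 5
W_gate = rng.normal(size=(d_model, n_experts))                    # router weights
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def top1_moe(x):
    """Route each token (row of x) to its single highest-scoring expert."""
    logits = x @ W_gate                                           # (n_tokens, n_experts)
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)                    # softmax router scores
    chosen = probs.argmax(axis=-1)                                # Top-1 expert per token
    out = np.empty_like(x)
    for i, e in enumerate(chosen):
        # Scale by the gate probability so (in a trained model) the
        # router would receive a gradient signal through the output.
        out[i] = probs[i, e] * (x[i] @ experts[e])
    return out, chosen

x = rng.normal(size=(n_tokens, d_model))
y, routing = top1_moe(x)
print(routing)  # one expert index per token
```

Because only one expert matrix is applied per token, adding experts grows parameter count (specialization) without growing per-token compute, which is the cooperation/specialization trade-off the abstract refers to.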