🤖 AI Summary
To address the dual challenges of modeling multimodal action distributions and scarce training data in bimanual robotic manipulation, this paper introduces RDT-1B, a 1.2-billion-parameter diffusion foundation model for bimanual manipulation. Methodologically: (i) it formulates a Physically Interpretable Unified Action Space that unifies action representations across different robots while preserving their physical meaning; (ii) it designs a scalable Transformer architecture that fuses heterogeneous multimodal observations — vision, language, and proprioceptive state; and (iii) it pre-trains the model on a large, diverse multi-robot data collection, enabling few-shot adaptation. Experiments on real robots demonstrate that RDT-1B achieves zero-shot generalization to unseen objects and scenes, follows natural-language instructions, and learns new skills from only 1–5 demonstrations, significantly outperforming existing baselines.
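The unified action space can be illustrated with a minimal sketch. The slot names, dimensions, and layout below are purely illustrative assumptions (the paper defines its own unified space); the idea is that each robot's native action is scattered into fixed, physically meaningful slots of a shared vector, with a mask marking which dimensions that embodiment actually controls.

```python
import numpy as np

# Hypothetical slot layout for a shared action vector (illustrative only;
# the paper's actual unified space and dimensionality differ).
SLOTS = {
    "right_arm_joints": slice(0, 7),   # up to 7 joint positions
    "right_gripper":    slice(7, 8),
    "left_arm_joints":  slice(8, 15),
    "left_gripper":     slice(15, 16),
}
UNIFIED_DIM = 16

def to_unified(native):
    """Scatter a robot's native action dict into the unified vector.

    Returns the padded vector and a boolean mask marking which
    dimensions this embodiment controls; unused slots stay zero/masked.
    """
    action = np.zeros(UNIFIED_DIM)
    mask = np.zeros(UNIFIED_DIM, dtype=bool)
    for name, values in native.items():
        s = SLOTS[name]
        action[s.start:s.start + len(values)] = values
        mask[s.start:s.start + len(values)] = True
    return action, mask

# A single-arm robot with 6 joints and a gripper fills only its own slots,
# so data from different embodiments shares one physically indexed format.
single_arm = {
    "right_arm_joints": np.array([0.1, -0.2, 0.3, 0.0, 0.5, -0.1]),
    "right_gripper": np.array([1.0]),
}
action, mask = to_unified(single_arm)
```

Because every dimension keeps a fixed physical meaning, a model trained on this format can transfer knowledge across robots instead of learning per-robot action conventions.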
📝 Abstract
Bimanual manipulation is essential in robotics, yet developing foundation models is extremely challenging due to the inherent complexity of coordinating two robot arms (leading to multi-modal action distributions) and the scarcity of training data. In this paper, we present the Robotics Diffusion Transformer (RDT), a pioneering diffusion foundation model for bimanual manipulation. RDT builds on diffusion models to effectively represent multi-modality, with a scalable Transformer designed to handle the heterogeneity of multi-modal inputs and to capture the nonlinearity and high frequency of robotic data. To address data scarcity, we further introduce a Physically Interpretable Unified Action Space, which unifies the action representations of various robots while preserving the physical meanings of the original actions, facilitating the learning of transferable physical knowledge. With these designs, we pre-trained RDT on the largest collection of multi-robot datasets to date and scaled it up to 1.2B parameters, making it the largest diffusion-based foundation model for robotic manipulation. We finally fine-tuned RDT on a self-created multi-task bimanual dataset with over 6K episodes to refine its manipulation capabilities. Experiments on real robots demonstrate that RDT significantly outperforms existing methods. It exhibits zero-shot generalization to unseen objects and scenes, understands and follows language instructions, learns new skills with just 1–5 demonstrations, and effectively handles complex, dexterous tasks. Code and videos are available at https://rdt-robotics.github.io/rdt-robotics/.
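To see why a diffusion model captures multi-modal action distributions, it helps to look at how such a policy produces an action at inference time: it starts from Gaussian noise and iteratively denoises it, conditioned on the observation, so different noise samples can land on different valid modes. Below is a minimal DDPM-style sketch under stated assumptions — the `denoiser` is a toy stand-in, not RDT's learned Transformer, and all names and the schedule are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 50                                  # number of denoising steps (assumed)
betas = np.linspace(1e-4, 0.02, T)      # linear noise schedule (assumed)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def denoiser(x_t, t, obs):
    """Toy stand-in for the learned noise-prediction network eps_theta.
    A real model (e.g. RDT's Transformer) would predict the noise added
    to the clean action, conditioned on vision/language/proprioception."""
    return 0.1 * x_t + 0.05 * obs       # arbitrary illustrative function

def sample_action(obs, action_dim=16):
    """Reverse diffusion: start from pure noise, denoise step by step."""
    x = rng.standard_normal(action_dim)
    for t in reversed(range(T)):
        eps = denoiser(x, t, obs)
        # Standard DDPM posterior-mean update given the predicted noise.
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:                       # add noise on all but the final step
            x += np.sqrt(betas[t]) * rng.standard_normal(action_dim)
    return x

sampled = sample_action(obs=np.zeros(16))
```

Because sampling is stochastic, repeated calls with different noise seeds can yield distinct but individually coherent actions — exactly the behavior needed when two-arm coordination admits several valid solutions.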