🤖 AI Summary
Weak generalization of unimodal diffusion policies (DPs) and the high computational cost of joint multimodal training hinder scalable robotic policy learning. To address this, we propose the Modality-Composable Diffusion Policy (MCDP), a framework that dynamically composes pre-trained RGB and point-cloud DPs at inference time, without any additional training, via distribution-level score fusion. Grounded in score-based generative modeling, MCDP introduces multimodal distribution alignment and weighted score aggregation, and uses the RoboTwin simulation environment to evaluate cross-modal policy coordination. Its core innovation is the first "modality-composable" DP paradigm, supporting zero-shot cross-domain and cross-embodiment transfer. Evaluated on the RoboTwin dataset, MCDP achieves an average task success rate 12.7% higher than unimodal DPs, demonstrating significantly improved robustness while incurring no training overhead.
📝 Abstract
Diffusion Policy (DP) has attracted significant attention as an effective method for policy representation due to its capacity to model multi-distribution dynamics. However, current DPs are often based on a single visual modality (e.g., RGB or point cloud), limiting their accuracy and generalization potential. Although training a generalized DP capable of handling heterogeneous multimodal data would enhance performance, it entails substantial computational and data-related costs. To address these challenges, we propose a novel policy composition method: by leveraging multiple pre-trained DPs based on individual visual modalities, we can combine their distributional scores to form a more expressive Modality-Composable Diffusion Policy (MCDP), without the need for additional training. Through extensive empirical experiments on the RoboTwin dataset, we demonstrate the potential of MCDP to improve both adaptability and performance. This exploration aims to provide valuable insights into the flexible composition of existing DPs, facilitating the development of generalizable cross-modality, cross-domain, and even cross-embodiment policies. Our code is open-sourced at https://github.com/AndyCao1125/MCDP.
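The composition idea described above can be sketched in a few lines: at each reverse-diffusion step, the noise predictions (scores) of the two pre-trained unimodal policies are combined as a weighted sum, and the fused prediction drives an ordinary denoising update. The sketch below is a minimal illustration under assumed DDPM-style sampling; the `eps_rgb` and `eps_pcd` functions are hypothetical placeholders for the pre-trained RGB and point-cloud policy networks, and the fusion weight `w_rgb` is an illustrative parameter, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two pre-trained unimodal diffusion policies:
# each maps (noisy action, timestep) -> predicted noise ("score").
def eps_rgb(x_t, t):
    return 0.1 * x_t   # placeholder for the RGB-based policy network

def eps_pcd(x_t, t):
    return -0.05 * x_t # placeholder for the point-cloud-based policy network

def composed_eps(x_t, t, w_rgb=0.5):
    """Weighted score aggregation: fuse the two noise predictions
    at the distribution level, with no additional training."""
    return w_rgb * eps_rgb(x_t, t) + (1.0 - w_rgb) * eps_pcd(x_t, t)

# DDPM-style reverse diffusion driven by the composed score (assumed schedule).
T = 50
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

x = rng.standard_normal(7)  # e.g. a 7-DoF action sample, starting from noise
for t in reversed(range(T)):
    eps = composed_eps(x, t, w_rgb=0.6)
    coef = (1.0 - alphas[t]) / np.sqrt(1.0 - alpha_bars[t])
    x = (x - coef * eps) / np.sqrt(alphas[t])  # posterior mean estimate
    if t > 0:
        x = x + np.sqrt(betas[t]) * rng.standard_normal(x.shape)
```

Because the fusion happens purely at sampling time, either unimodal policy can be swapped out or re-weighted per task without retraining, which is the property the paper exploits.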