Modality-Composable Diffusion Policy via Inference-Time Distribution-level Composition

📅 2025-03-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Weak generalization of unimodal diffusion policies (DPs) and high computational cost of multimodal joint training hinder scalable robotic policy learning. To address this, we propose Modality-Composable Diffusion Policies (MCDP), a framework that dynamically composes pre-trained RGB and point-cloud DPs at inference time—without additional training—via distribution-level score fusion. Grounded in score-based generative modeling, MCDP introduces multimodal distribution alignment and weighted score aggregation, and integrates the RoboTwin simulation environment to enable cross-modal policy coordination. Its core innovation is the first “modality-composable” DP paradigm, supporting zero-shot cross-domain and cross-form factor transfer. Evaluated on the RoboTwin dataset, MCDP achieves an average task success rate 12.7% higher than unimodal DPs, demonstrating significantly improved robustness while incurring zero inference-time training overhead.

Technology Category

Application Category

📝 Abstract
Diffusion Policy (DP) has attracted significant attention as an effective method for policy representation due to its capacity to model multi-distribution dynamics. However, current DPs are often based on a single visual modality (e.g., RGB or point cloud), limiting their accuracy and generalization potential. Although training a generalized DP capable of handling heterogeneous multimodal data would enhance performance, it entails substantial computational and data-related costs. To address these challenges, we propose a novel policy composition method: by leveraging multiple pre-trained DPs based on individual visual modalities, we can combine their distributional scores to form a more expressive Modality-Composable Diffusion Policy (MCDP), without the need for additional training. Through extensive empirical experiments on the RoboTwin dataset, we demonstrate the potential of MCDP to improve both adaptability and performance. This exploration aims to provide valuable insights into the flexible composition of existing DPs, facilitating the development of generalizable cross-modality, cross-domain, and even cross-embodiment policies. Our code is open-sourced at https://github.com/AndyCao1125/MCDP.
Problem

Research questions and friction points this paper is trying to address.

Enhance policy representation using multi-distribution dynamics.
Overcome limitations of single visual modality in Diffusion Policy.
Combine pre-trained DPs for better adaptability and performance.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines pre-trained diffusion policies without retraining
Enhances adaptability via modality-composable diffusion policy
Improves performance using inference-time distribution-level composition
🔎 Similar Papers
Jiahang Cao
Jiahang Cao
The University of Hong Kong
Robot LearningGenerative ModelsCognitive-inspired Models
Q
Qiang Zhang
The Hong Kong University of Science and Technology (Guangzhou)
Hanzhong Guo
Hanzhong Guo
University of Hong Kong
Diffusion ModelsModel Efficiency
J
Jiaxu Wang
The Hong Kong University of Science and Technology (Guangzhou)
H
Hao Cheng
The Hong Kong University of Science and Technology (Guangzhou)
Renjing Xu
Renjing Xu
HKUST(GZ)
Brain-inspired ComputingHumanoid Computing