Factorizing Diffusion Policies for Observation Modality Prioritization

📅 2025-09-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion policies struggle to capture the heterogeneous contributions of multimodal observations (vision, proprioception, and touch) to action generation. To address this, the authors propose Factorized Diffusion Policy (FDP), which introduces a modality-specific priority mechanism: factorized conditional modeling lets each modality exert differing influence on the denoising process. Built on an end-to-end trainable diffusion policy framework, FDP requires neither cross-modal alignment nor pretraining. Experiments show a 15% absolute improvement in success rate in low-data regimes and a 40-percentage-point absolute gain under distribution shifts such as visual distractors or camera occlusions, where jointly conditioned diffusion policies fail catastrophically. The core contribution is formalizing modality importance as a structured prior over the diffusion process, improving the robustness and adaptability of the perception-action mapping.

📝 Abstract
Diffusion models have been extensively leveraged for learning robot skills from demonstrations. These policies are conditioned on several observational modalities such as proprioception, vision, and touch. However, observational modalities have varying levels of influence for different tasks, which diffusion policies fail to capture. In this work, we propose 'Factorized Diffusion Policies', abbreviated as FDP, a novel policy formulation that enables observational modalities to have differing influence on the action diffusion process by design. This results in learning policies where certain observation modalities can be prioritized over others, such as $\texttt{vision>tactile}$ or $\texttt{proprioception>vision}$. FDP achieves modality prioritization by factorizing the observational conditioning for the diffusion process, resulting in more performant and robust policies. Our factored approach shows strong performance improvements in low-data regimes, with a 15% absolute improvement in success rate on several simulated benchmarks when compared to a standard diffusion policy that jointly conditions on all input modalities. Moreover, our benchmark and real-world experiments show that factored policies are naturally more robust, with a 40% higher absolute success rate across several visuomotor tasks under distribution shifts such as visual distractors or camera occlusions, where existing diffusion policies fail catastrophically. FDP thus offers a safer and more robust alternative to standard diffusion policies for real-world deployment. Videos are available at https://fdp-policy.github.io/fdp-policy/ .
Problem

Research questions and friction points this paper is trying to address.

Standard diffusion policies fail to capture that observational modalities should be weighted differently across tasks
Existing approaches cannot adapt modality influence to task requirements
Standard diffusion policies lack robustness under distribution shifts and degrade in low-data regimes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Factorized observational conditioning for diffusion process
Modality prioritization by design in action diffusion
Improved robustness under distribution shifts
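The page does not give FDP's exact parameterization, but the core idea of factorized observational conditioning can be sketched as follows: rather than one denoiser jointly conditioned on all modalities, keep a per-modality noise predictor and combine the predictions with priority weights. A minimal sketch, assuming toy linear denoisers; all function names, dimensions, and weight values here are hypothetical stand-ins, not the paper's implementation:

```python
import numpy as np

ACTION_DIM = 4

def make_denoiser(obs_dim, action_dim=ACTION_DIM, seed=0):
    # Stand-in per-modality noise predictor: a fixed random linear map
    # over [observation, noisy action, timestep] (hypothetical).
    r = np.random.default_rng(seed)
    W = 0.1 * r.normal(size=(action_dim, obs_dim + action_dim + 1))
    return lambda obs, action, t: W @ np.concatenate([obs, action, [t]])

def factorized_eps(action, t, obs, denoisers, weights):
    # Weighted sum of per-modality noise predictions, so each modality's
    # influence on the denoising step is controlled explicitly instead of
    # being entangled in a single jointly conditioned network.
    eps = np.zeros_like(action)
    for name, o in obs.items():
        eps += weights[name] * denoisers[name](o, action, t)
    return eps

rng = np.random.default_rng(0)
denoisers = {"vision": make_denoiser(8, seed=1),
             "proprio": make_denoiser(3, seed=2)}
obs = {"vision": rng.normal(size=8), "proprio": rng.normal(size=3)}
weights = {"vision": 0.8, "proprio": 0.2}   # e.g. vision > proprioception
action = rng.normal(size=ACTION_DIM)
eps = factorized_eps(action, 0.5, obs, denoisers, weights)
```

Under this composition, down-weighting a modality directly attenuates its effect on the predicted noise, which is one plausible route to the robustness the paper reports: a corrupted modality's contribution can be suppressed without retraining the others.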
Omkar Patil
School of Computing and Augmented Intelligence, Arizona State University
Prabin Rath
School of Computing and Augmented Intelligence, Arizona State University
Kartikay Pangaonkar
School of Computing and Augmented Intelligence, Arizona State University
Eric Rosen
Boston Dynamics AI Institute
Machine Learning · Robotics · Mixed Reality
Nakul Gopalan
Assistant Professor, Arizona State University
Robotics · Natural Language · Reinforcement Learning