Visual-Geometry Diffusion Policy: Robust Generalization via Complementarity-Aware Multimodal Fusion

📅 2025-11-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing imitation learning methods generalize poorly under visual and spatial randomization and struggle to fuse RGB and point-cloud modalities effectively. To address this, we propose a vision-geometry complementary fusion framework for robust visuomotor skill learning. Our method introduces a modality dropout mechanism that enforces balanced utilization of both modalities; shows empirically that cross-modal attention needs only a lightweight interaction layer, with the core robustness arising from latent-space modeling under complementarity constraints; and integrates a multimodal diffusion policy network trained end to end. Evaluated on 18 simulated and 4 real-world tasks, our approach achieves an average performance improvement of 39.1%, with gains of 41.5% under visual perturbations and 15.2% under spatial perturbations, significantly enhancing policy generalization and robustness across diverse environmental variations.
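To make the modality dropout mechanism concrete, below is a minimal PyTorch sketch; the class name ModalityDropout, the per-sample masking scheme, and the dropout probability are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn


class ModalityDropout(nn.Module):
    """Illustrative sketch: randomly zero out an entire modality embedding during
    training so the policy cannot rely on RGB or point-cloud features alone."""

    def __init__(self, p_drop: float = 0.2):
        super().__init__()
        self.p_drop = p_drop  # assumed probability of dropping a modality

    def forward(self, rgb_feat: torch.Tensor, pcd_feat: torch.Tensor):
        # rgb_feat, pcd_feat: pooled features of shape (batch, dim)
        if not self.training:
            return rgb_feat, pcd_feat
        b = rgb_feat.shape[0]
        # One Bernoulli keep-mask per modality, per sample in the batch.
        keep_rgb = (torch.rand(b, 1, device=rgb_feat.device) > self.p_drop).float()
        keep_pcd = (torch.rand(b, 1, device=pcd_feat.device) > self.p_drop).float()
        # Never drop both modalities for the same sample.
        both_dropped = (keep_rgb + keep_pcd) == 0
        keep_rgb = torch.where(both_dropped, torch.ones_like(keep_rgb), keep_rgb)
        return rgb_feat * keep_rgb, pcd_feat * keep_pcd
```

Zeroing an entire modality embedding, rather than individual features, is what forces the policy to stay functional when either the RGB or the geometric cue is missing, which is the balanced-utilization behavior the summary describes.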

📝 Abstract
Imitation learning has emerged as a crucial approach for acquiring visuomotor skills from demonstrations, where designing effective observation encoders is essential for policy generalization. However, existing methods often struggle to generalize under spatial and visual randomizations, instead tending to overfit. To address this challenge, we propose Visual Geometry Diffusion Policy (VGDP), a multimodal imitation learning framework built around a Complementarity-Aware Fusion Module where modality-wise dropout enforces balanced use of RGB and point-cloud cues, with cross-attention serving only as a lightweight interaction layer. Our experiments show that the expressiveness of the fused latent space is largely induced by the enforced complementarity from modality-wise dropout, with cross-attention serving primarily as a lightweight interaction mechanism rather than the main source of robustness. Across a benchmark of 18 simulated tasks and 4 real-world tasks, VGDP outperforms seven baseline policies with an average performance improvement of 39.1%. More importantly, VGDP demonstrates strong robustness under visual and spatial perturbations, surpassing baselines with an average improvement of 41.5% in different visual conditions and 15.2% in different spatial settings.
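As a complement to the abstract, here is a hedged sketch of how a fused visual-geometric latent could condition a diffusion-style action denoiser; the network shape, action horizon, and dimensionalities are assumptions for illustration and are not claimed to match VGDP's actual architecture.

```python
import torch
import torch.nn as nn


class ConditionalActionDenoiser(nn.Module):
    """Illustrative sketch: predict the noise added to an action sequence,
    conditioned on the fused observation latent and the diffusion timestep."""

    def __init__(self, action_dim: int = 7, horizon: int = 16, cond_dim: int = 256):
        super().__init__()
        self.time_embed = nn.Sequential(nn.Linear(1, cond_dim), nn.SiLU())
        self.net = nn.Sequential(
            nn.Linear(action_dim * horizon + 2 * cond_dim, 512),
            nn.SiLU(),
            nn.Linear(512, action_dim * horizon),
        )
        self.action_dim, self.horizon = action_dim, horizon

    def forward(self, noisy_actions, timestep, fused_latent):
        # noisy_actions: (batch, horizon, action_dim); fused_latent: (batch, cond_dim)
        b = noisy_actions.shape[0]
        t = self.time_embed(timestep.float().view(b, 1))
        # Concatenate flattened actions, timestep embedding, and fused observation latent.
        x = torch.cat([noisy_actions.flatten(1), t, fused_latent], dim=-1)
        return self.net(x).view(b, self.horizon, self.action_dim)
```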
Problem

Research questions and friction points this paper is trying to address.

Addresses poor generalization in imitation learning under spatial and visual variations
Proposes a multimodal fusion method to balance RGB and point-cloud data usage
Enhances robustness to visual and spatial perturbations in visuomotor tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal fusion with modality-wise dropout for balanced cue usage
Cross-attention as a lightweight interaction layer in the fusion module (see the sketch after this list)
Complementarity-aware fusion to enhance robustness and generalization
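The bullets above describe cross-attention as a lightweight interaction layer; the sketch below shows one way such a layer could look in PyTorch. The module name, token shapes, mean-pooling, and output projection are assumptions for illustration rather than the paper's implementation.

```python
import torch
import torch.nn as nn


class CrossModalInteraction(nn.Module):
    """Illustrative sketch: a single cross-attention pass in each direction,
    then both streams are pooled and fused into one conditioning latent."""

    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        self.rgb_to_pcd = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.pcd_to_rgb = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, rgb_tokens: torch.Tensor, pcd_tokens: torch.Tensor) -> torch.Tensor:
        # rgb_tokens: (batch, N_rgb, dim); pcd_tokens: (batch, N_pcd, dim)
        # Each stream queries the other modality; the residual keeps the original cue.
        rgb_out, _ = self.rgb_to_pcd(rgb_tokens, pcd_tokens, pcd_tokens)
        pcd_out, _ = self.pcd_to_rgb(pcd_tokens, rgb_tokens, rgb_tokens)
        rgb_tokens = rgb_tokens + rgb_out
        pcd_tokens = pcd_tokens + pcd_out
        # Pool each stream and project to a single conditioning vector.
        fused = torch.cat([rgb_tokens.mean(dim=1), pcd_tokens.mean(dim=1)], dim=-1)
        return self.proj(fused)
```

Keeping this block small, a single attention pass per direction, reflects the paper's finding that robustness stems mainly from the complementarity enforced by modality-wise dropout rather than from heavy cross-modal attention.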
🔎 Similar Papers
No similar papers found.
Yikai Tang
University of California, Berkeley

Haoran Geng
PhD Student, UC Berkeley
Robotics, Computer Vision, Reinforcement Learning

Sheng Zang
Nanyang Technological University

Pieter Abbeel
UC Berkeley | Covariant
Robotics, Machine Learning, AI

Jitendra Malik
University of California, Berkeley