VO-DP: Semantic-Geometric Adaptive Diffusion Policy for Vision-Only Robotic Manipulation

📅 2025-10-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Most existing robotic imitation learning approaches rely heavily on point-cloud inputs, with limited exploration of purely vision-based methods. This paper introduces VO-DP: the first single-view image-driven diffusion policy for robotic manipulation that operates without point clouds. VO-DP adaptively fuses semantic features from DINOv2 encodings and geometric features extracted via alternating attention, integrating cross-attention mechanisms with CNN-based spatial compression to construct a robust visual policy network. In simulation, VO-DP achieves a success rate of 64.6%, matching the point-cloud baseline DP3 (64.0%); in real-world experiments, it attains 87.9%, significantly outperforming DP3 (67.5%) and DP (11.2%). To our knowledge, VO-DP is the first method to enable end-to-end co-modeling of semantic and geometric features in pure-vision imitation learning, demonstrating strong environmental robustness. We open-source a scalable robot manipulation library supporting distributed training.

📝 Abstract
In the context of imitation learning, visuomotor-based diffusion policy learning is one of the main directions in robotic manipulation. Most of these approaches rely on point clouds as observation inputs and construct scene representations through point-cloud feature learning, which enables them to achieve remarkable accuracy. However, the existing literature lacks an in-depth exploration of vision-only solutions, which have significant potential. In this paper, we propose a Vision-Only and single-view Diffusion Policy learning method (VO-DP) that leverages pretrained visual foundation models to achieve effective fusion of semantic and geometric features. We utilize intermediate features from VGGT, which incorporate semantic features from DINOv2 and geometric features from Alternating Attention blocks. Features are fused via cross-attention and spatially compressed with a CNN to form the input to the policy head. Extensive experiments demonstrate that VO-DP not only significantly outperforms the vision-only baseline DP but also exhibits distinct performance trends against the point-cloud-based method DP3: in simulation tasks, VO-DP achieves an average success rate of 64.6%, on par with DP3 (64.0%) and far higher than DP (34.8%), while in real-world tasks it reaches 87.9%, outperforming both DP3 (67.5%) and DP (11.2%) by a notable margin. Further robustness evaluations confirm that VO-DP remains highly stable under varying conditions, including color, size, background, and lighting. Lastly, we open-source a training library for robotic manipulation. Built on Accelerate, this library supports multi-machine and multi-GPU parallel training, as well as mixed-precision training. It is compatible with visuomotor policies such as DP, DP3, and VO-DP, and also supports the RoboTwin simulator.
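The abstract's fusion step (semantic tokens attending to geometric tokens via cross-attention, then spatial compression before the policy head) can be sketched in plain numpy. This is an illustrative toy, not the authors' released implementation: the token counts, feature width, random projection weights, and the average-pooling stand-in for the paper's CNN compressor are all assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(sem, geo, d):
    """Fuse semantic tokens (queries) with geometric tokens (keys/values).
    Wq/Wk/Wv are random stand-ins for learned projections."""
    rng = np.random.default_rng(0)
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    Q, K, V = sem @ Wq, geo @ Wk, geo @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d))   # (N, M) attention weights
    return attn @ V                        # (N, d) fused tokens

# Hypothetical token grids: a 16x16 patch grid with 64-dim features.
d = 64
sem = np.random.default_rng(1).standard_normal((256, d))  # DINOv2-style semantic tokens
geo = np.random.default_rng(2).standard_normal((256, d))  # geometric tokens
fused = cross_attention(sem, geo, d)                      # (256, 64)

# Spatial compression stand-in (the paper uses a CNN): reshape tokens back
# to their grid and average-pool 2x2 blocks before the policy head.
grid = fused.reshape(16, 16, d)
compressed = grid.reshape(8, 2, 8, 2, d).mean(axis=(1, 3))  # (8, 8, 64)
```

The design point the sketch illustrates: semantic features act as queries, so each semantic token selects the geometric context relevant to it before the compressed map is handed to the diffusion policy head.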
Problem

Research questions and friction points this paper is trying to address.

Develops vision-only robotic manipulation without point clouds
Fuses semantic and geometric features using pretrained models
Achieves high success rates in simulation and real-world tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-only diffusion policy with semantic-geometric fusion
Cross-attention fusion of VGGT, DINOv2, and Alternating Attention features
CNN spatial compression for policy input generation
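For readers unfamiliar with the diffusion-policy side of the pipeline, the action head works by iteratively denoising a noisy action trajectory conditioned on the fused observation feature. The following toy sketch shows that reverse-process shape only; the `eps_model` stand-in (a fixed random linear map), the horizon, action dimension, and step count are all hypothetical, not values from the paper.

```python
import numpy as np

def denoise_actions(obs_feat, steps=10, horizon=8, act_dim=7, seed=0):
    """Toy DDPM-style reverse process: start from Gaussian noise and
    iteratively refine an action trajectory conditioned on obs_feat.
    The random linear map W stands in for the trained denoising network."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((obs_feat.size + 1, horizon * act_dim)) * 0.01
    actions = rng.standard_normal((horizon, act_dim))  # pure noise at t = steps
    for t in range(steps, 0, -1):
        # Condition on the observation feature plus a normalized timestep.
        cond = np.concatenate([obs_feat.ravel(), [t / steps]])
        eps_hat = (cond @ W).reshape(horizon, act_dim)  # "predicted noise"
        actions = actions - (1.0 / steps) * eps_hat     # simple Euler-style update
    return actions

obs = np.ones(64)            # hypothetical compressed observation feature
traj = denoise_actions(obs)  # (horizon, act_dim) action trajectory
```

In the real method the denoising network is trained to predict the noise added to demonstration trajectories, so the same loop recovers expert-like actions rather than this toy's arbitrary output.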
Zehao Ni
National Key Laboratory of Autonomous Intelligent Unmanned Systems, D-ROBOTICS, Frontiers Science Center for Intelligent Autonomous Systems, Shanghai Institute of Intelligent Science and Technology, Tongji University
Yonghao He
D-ROBOTICS
Lingfeng Qian
D-ROBOTICS
Jilei Mao
D-ROBOTICS
Fa Fu
D-ROBOTICS
Wei Sui
Horizon Robotics
3D Vision, BEV Perception, 3D Reconstruction
Hu Su
State Key Laboratory of Multimodal Artificial Intelligence System (MAIS), Institute of Automation, Chinese Academy of Sciences
Junran Peng
Associate Professor, USTB
3D AIGC, 3D Comprehension and Reconstruction, Embodied AI
Zhipeng Wang
National Key Laboratory of Autonomous Intelligent Unmanned Systems, Frontiers Science Center for Intelligent Autonomous Systems, Shanghai Institute of Intelligent Science and Technology, Tongji University
Bin He
National Key Laboratory of Autonomous Intelligent Unmanned Systems, Frontiers Science Center for Intelligent Autonomous Systems, Shanghai Institute of Intelligent Science and Technology, Tongji University