DualDiff: Dual-branch Diffusion Model for Autonomous Driving with Semantic Fusion

📅 2025-05-03
📈 Citations: 1
Influential: 0
🤖 AI Summary
Existing autonomous driving scene reconstruction methods rely on 3D bounding boxes and binary masks, limiting their capacity to model complex geometric structures and multimodal semantics. To address this, we propose a dual-branch conditional diffusion model. First, we introduce Occupancy Ray Sampling (ORS), a novel 3D semantic representation that encodes scenes as semantically enriched voxel rays. Second, we design Semantic Fusion Attention (SFA) to enable precise cross-modal feature alignment between vision and geometry modalities. Third, we incorporate a foreground-aware mask loss (FGM) to improve reconstruction fidelity—particularly for small-scale objects. Evaluated on high-fidelity multi-view driving scene reconstruction, our method achieves state-of-the-art FID scores. Moreover, it consistently outperforms prior works on downstream tasks, including BEV semantic segmentation and 3D object detection.
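The summary describes Occupancy Ray Sampling only at a high level: scenes are encoded as semantically enriched voxel rays. One plausible reading is a simple ray march through a labeled occupancy grid that records the first occupied voxel's class per ray. This is a minimal sketch under that assumption, not the paper's implementation; all names and the step/termination scheme are hypothetical.

```python
import numpy as np

def occupancy_ray_sample(occ, origin, directions, step=0.5, max_dist=20.0):
    """Hypothetical ORS-style sampler: for each ray, march through a
    semantic occupancy grid `occ` (integer class labels, 0 = free) and
    return the label of the first occupied voxel hit, or 0 for a miss."""
    labels = np.zeros(len(directions), dtype=occ.dtype)
    ts = np.arange(step, max_dist, step)
    bounds = np.array(occ.shape)
    for i, d in enumerate(directions):
        d = d / np.linalg.norm(d)
        for t in ts:
            idx = np.floor(origin + t * d).astype(int)
            if np.any(idx < 0) or np.any(idx >= bounds):
                break  # ray left the grid without hitting anything
            if occ[tuple(idx)] != 0:
                labels[i] = occ[tuple(idx)]
                break
    return labels
```

The per-ray labels could then be embedded and stacked into the semantic ray representation the summary refers to; how DualDiff actually discretizes, terminates, and encodes rays is not specified here.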

📝 Abstract
Accurate and high-fidelity driving scene reconstruction relies on fully leveraging scene information as conditioning. However, existing approaches, which primarily use 3D bounding boxes and binary maps for foreground and background control, fall short in capturing the complexity of the scene and integrating multi-modal information. In this paper, we propose DualDiff, a dual-branch conditional diffusion model designed to enhance multi-view driving scene generation. We introduce Occupancy Ray Sampling (ORS), a semantic-rich 3D representation, alongside numerical driving scene representation, for comprehensive foreground and background control. To improve cross-modal information integration, we propose a Semantic Fusion Attention (SFA) mechanism that aligns and fuses features across modalities. Furthermore, we design a foreground-aware masked (FGM) loss to enhance the generation of tiny objects. DualDiff achieves state-of-the-art performance in FID score, as well as consistently better results in downstream BEV segmentation and 3D object detection tasks.
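The abstract motivates the foreground-aware masked (FGM) loss but gives no formula. A plausible sketch is a diffusion noise-prediction MSE up-weighted inside foreground object masks so that tiny objects contribute more to the gradient; the weighting scheme and function names below are assumptions, not the paper's definition.

```python
import numpy as np

def fg_masked_loss(noise_pred, noise_true, fg_mask, fg_weight=5.0):
    """Hypothetical FGM-style loss: per-pixel squared error on the
    predicted diffusion noise, up-weighted by `fg_weight` wherever
    fg_mask == 1 (foreground), then normalized by the total weight."""
    w = 1.0 + (fg_weight - 1.0) * fg_mask   # 1 on background, fg_weight on foreground
    se = (noise_pred - noise_true) ** 2
    return float((w * se).sum() / w.sum())
```

With a uniform error the weighted loss reduces to the plain MSE; when errors concentrate on foreground pixels, the loss rises relative to an unweighted mean, which is the intended pressure toward small-object fidelity.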
Problem

Research questions and friction points this paper is trying to address.

Enhance multi-view driving scene generation accuracy
Improve cross-modal information integration in scenes
Boost tiny object generation in driving scenes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-branch diffusion model for scene generation
Semantic Fusion Attention for cross-modal fusion
Foreground-aware masked loss for tiny objects
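The Semantic Fusion Attention mechanism is described only as aligning and fusing features across modalities. A common way to realize that is single-head cross-attention in which image tokens query the geometric (ray/occupancy) tokens; this is a generic sketch under that assumption, with hypothetical names, not DualDiff's actual SFA block.

```python
import numpy as np

def semantic_fusion_attention(img_feats, geo_feats, Wq, Wk, Wv):
    """Hypothetical SFA-style fusion: scaled dot-product cross-attention
    where image tokens (queries) attend over geometry tokens (keys and
    values), yielding geometry-aligned image features."""
    Q = img_feats @ Wq
    K = geo_feats @ Wk
    V = geo_feats @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)      # softmax over geometry tokens
    return attn @ V  # (num_img_tokens, d_v)
```

In practice such a block would sit inside the diffusion U-Net with multiple heads and a residual connection; those details are not given in this summary.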
Haoteng Li
National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, and Institute of Artificial Intelligence and Robotics, Xi’an Jiaotong University, Xi’an, Shaanxi, 710049, China
Zhao Yang
National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, and Institute of Artificial Intelligence and Robotics, Xi’an Jiaotong University, Xi’an, Shaanxi, 710049, China
Zezhong Qian
Xi'an Jiaotong University
World Model, Autonomous Driving, Video Generation, Robot Manipulation
Gongpeng Zhao
University of Science and Technology of China, Hefei, Anhui, 230026, China
Yuqi Huang
National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, and Institute of Artificial Intelligence and Robotics, Xi’an Jiaotong University, Xi’an, Shaanxi, 710049, China
Jun Yu
University of Science and Technology of China, Hefei, Anhui, 230026, China
Huazheng Zhou
National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, and Institute of Artificial Intelligence and Robotics, Xi’an Jiaotong University, Xi’an, Shaanxi, 710049, China
Longjun Liu
Xi'an Jiaotong University
Computer Architecture, VLSI, Deep Learning, DNN Accelerator