🤖 AI Summary
Existing autonomous driving scene reconstruction methods rely on 3D bounding boxes and binary masks, limiting their capacity to model complex geometric structures and multimodal semantics. To address this, we propose a dual-branch conditional diffusion model. First, we introduce Occupancy Ray Sampling (ORS), a novel 3D semantic representation that encodes scenes as semantically enriched voxel rays. Second, we design Semantic Fusion Attention (SFA) to enable precise cross-modal feature alignment between the vision and geometry modalities. Third, we incorporate a foreground-aware mask (FGM) loss to improve reconstruction fidelity, particularly for small-scale objects. On high-fidelity multi-view driving scene reconstruction, our method achieves state-of-the-art FID scores and consistently outperforms prior work on downstream tasks, including BEV semantic segmentation and 3D object detection.
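The summary describes SFA only at a high level; one minimal reading is a cross-attention block in which vision-branch tokens query geometry (ORS) tokens. The PyTorch sketch below is an illustrative guess under that assumption, not the paper's architecture; the class name `SemanticFusionAttention`, the feature dimensions, and the residual layout are all hypothetical.

```python
import torch
import torch.nn as nn

class SemanticFusionAttention(nn.Module):
    """Cross-modal fusion sketch: vision tokens attend to geometry tokens.
    Shapes, normalization, and residual structure are assumptions, not
    the paper's specification."""

    def __init__(self, dim: int = 320, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, vis_tokens: torch.Tensor, geo_tokens: torch.Tensor) -> torch.Tensor:
        # vis_tokens: (B, N_v, dim) flattened image-branch features
        # geo_tokens: (B, N_g, dim) flattened ORS/geometry features
        q = self.norm(vis_tokens)
        # Queries come from the vision branch; keys/values from geometry,
        # so geometric semantics are injected where the image attends to them.
        fused, _ = self.attn(q, geo_tokens, geo_tokens)
        return vis_tokens + fused  # residual keeps the original vision pathway intact
```

For example, `SemanticFusionAttention(dim=320)(torch.randn(2, 4096, 320), torch.randn(2, 1024, 320))` returns fused vision features of shape `(2, 4096, 320)`, so the block can drop into a U-Net stage without changing its tensor layout.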
📝 Abstract
Accurate, high-fidelity driving scene reconstruction relies on fully leveraging scene information as conditioning. However, existing approaches, which primarily use 3D bounding boxes and binary masks for foreground and background control, fall short of capturing the full complexity of the scene and integrating multimodal information. In this paper, we propose DualDiff, a dual-branch conditional diffusion model designed to enhance multi-view driving scene generation. We introduce Occupancy Ray Sampling (ORS), a semantically rich 3D representation, alongside a numerical driving-scene representation, for comprehensive foreground and background control. To improve cross-modal information integration, we propose a Semantic Fusion Attention (SFA) mechanism that aligns and fuses features across modalities. Furthermore, we design a foreground-aware mask (FGM) loss to enhance the generation of tiny objects. DualDiff achieves state-of-the-art FID scores, as well as consistently better results on downstream BEV segmentation and 3D object detection tasks.
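The abstract does not give the FGM loss in closed form; a plausible reading is a per-pixel reweighting of the standard diffusion denoising objective that up-weights pixels covered by foreground objects. The sketch below assumes a PyTorch setup with an epsilon-prediction objective; `fg_mask` and `fg_weight` are hypothetical names, not the paper's notation.

```python
import torch
import torch.nn.functional as F

def fgm_loss(noise_pred: torch.Tensor,
             noise_gt: torch.Tensor,
             fg_mask: torch.Tensor,
             fg_weight: float = 2.0) -> torch.Tensor:
    """Foreground-aware mask loss (sketch, not the paper's exact formulation).

    noise_pred, noise_gt: (B, C, H, W) predicted / target diffusion noise.
    fg_mask:   (B, 1, H, W) binary map, 1 where a foreground object
               (e.g. a projected 3D box) covers the latent pixel.
    fg_weight: assumed up-weighting factor for foreground pixels.
    """
    per_pixel = F.mse_loss(noise_pred, noise_gt, reduction="none")
    weights = 1.0 + (fg_weight - 1.0) * fg_mask  # background pixels keep weight 1
    return (weights * per_pixel).mean()
```

The rationale for such a weighting: tiny objects occupy few pixels, so a uniform MSE lets them be dominated by the background; up-weighting masked regions counteracts this while leaving the objective unchanged wherever the mask is zero.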