🤖 AI Summary
To address weak controllability, insufficient fine-grained spatial control, and challenging cross-modal alignment in monocular RGB-to-panoramic LiDAR generation, this paper proposes a controllable generation framework based on conditional diffusion. Our key contributions are: (1) a confidence-aware semantic-depth joint modulation mechanism enabling adaptive multi-cue fusion; (2) geometry-driven cross-modal alignment coupled with panoramic feature consistency constraints to ensure robust 3D structural and global semantic alignment; and (3) novel cross-modal semantic and depth consistency metrics for quantitative evaluation. The method achieves state-of-the-art generation quality on nuScenes, SemanticKITTI, and KITTI-Weather, and the generated LiDAR data significantly improves downstream semantic segmentation performance. By enabling high-fidelity, controllable, and geometry-aware LiDAR synthesis from single-view RGB inputs, our approach establishes a new paradigm for low-cost, highly controllable multimodal simulation.
📝 Abstract
Realistic and controllable panoramic LiDAR data generation is critical for scalable 3D perception in autonomous driving and robotics. Existing methods either perform unconditional generation with poor controllability or adopt text-guided synthesis, which lacks fine-grained spatial control. Leveraging a monocular RGB image as a spatial control signal offers a scalable and low-cost alternative, yet this setting remains an open problem. It faces three core challenges: (i) semantic and depth cues extracted from RGB vary spatially in reliability, complicating reliable conditional generation; (ii) modality gaps between RGB appearance and LiDAR geometry amplify alignment errors under noisy diffusion; and (iii) maintaining structural coherence between monocular RGB and panoramic LiDAR is difficult, particularly in regions where the image and LiDAR views do not overlap. To address these challenges, we propose Veila, a novel conditional diffusion framework that integrates: a Confidence-Aware Conditioning Mechanism (CACM) that strengthens RGB conditioning by adaptively balancing semantic and depth cues according to their local reliability; a Geometric Cross-Modal Alignment (GCMA) module for robust RGB-LiDAR alignment under noisy diffusion; and a Panoramic Feature Coherence (PFC) module that enforces global structural consistency across monocular RGB and panoramic LiDAR. Additionally, we introduce two metrics, Cross-Modal Semantic Consistency and Cross-Modal Depth Consistency, to evaluate alignment quality across modalities. Experiments on nuScenes, SemanticKITTI, and our proposed KITTI-Weather benchmark demonstrate that Veila achieves state-of-the-art generation fidelity and cross-modal consistency, while enabling generative data augmentation that improves downstream LiDAR semantic segmentation.
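To make the confidence-aware conditioning idea concrete, the sketch below shows one plausible form of per-pixel reliability-weighted fusion of semantic and depth condition features. This is an illustrative assumption, not the paper's actual CACM implementation: the function name, tensor shapes, and the use of simple normalized confidence maps are all hypothetical.

```python
import numpy as np

def confidence_weighted_fusion(sem_feat, depth_feat, sem_conf, depth_conf, eps=1e-6):
    """Blend semantic and depth condition features per pixel.

    Hypothetical shapes: features are (B, C, H, W); confidence maps
    are (B, H, W). Each location is weighted by the relative local
    reliability of its semantic vs. depth cue.
    """
    # Normalize the two confidence maps so they sum to one per pixel.
    total = sem_conf + depth_conf + eps
    w_sem = sem_conf / total
    w_dep = depth_conf / total
    # Broadcast the (B, H, W) weights across the channel dimension.
    return w_sem[:, None] * sem_feat + w_dep[:, None] * depth_feat
```

In this simplified view, regions where monocular depth is unreliable (e.g. sky or reflective surfaces) would lean on the semantic cue, and vice versa; the actual mechanism in Veila may learn this balance rather than compute it in closed form.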