🤖 AI Summary
To address degraded autonomous driving perception caused by the scarcity of LiDAR data under adverse weather conditions, this paper proposes a latent diffusion model (LDM)-based method for synthesizing weather-corrupted LiDAR point clouds. The authors are the first to couple an LDM with a point cloud autoencoder, and they introduce a clear-weather-scene-guided, geometry-aware post-processing mechanism to ensure physically consistent and geometrically realistic synthesis of rain- and fog-corrupted point clouds. They further incorporate LiDAR-domain-specific noise modeling to significantly enhance generation fidelity. Evaluated on the SemanticKITTI and ACDC benchmarks, the method achieves downstream semantic segmentation mIoU only 3.2% lower than that obtained with real adverse-weather scans, outperforming GAN-based baselines by 9.7% and demonstrating substantially improved generalization across weather conditions.
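The summary describes the architecture only at a high level. As a minimal, hypothetical sketch of how such a pipeline could be wired together, the PyTorch snippet below represents a LiDAR scan as a 2D range image, compresses it with a toy autoencoder, and runs DDPM-style ancestral sampling in the latent space. Every module name, shape, and hyperparameter here (RangeImageAutoencoder, LatentDenoiser, T=1000, the 64x1024 range-image resolution) is an illustrative assumption, not the authors' actual design.

```python
# Minimal, hypothetical sketch of a latent diffusion pipeline for LiDAR range
# images. Module names, shapes, and hyperparameters are illustrative only.
import torch
import torch.nn as nn

class RangeImageAutoencoder(nn.Module):
    """Toy autoencoder: compresses a 1x64x1024 range image into a latent map."""
    def __init__(self, latent_ch=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, latent_ch, 4, stride=2, padding=1),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

class LatentDenoiser(nn.Module):
    """Toy noise-prediction network conditioned on the diffusion timestep."""
    def __init__(self, latent_ch=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(latent_ch + 1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, latent_ch, 3, padding=1),
        )
    def forward(self, z, t):
        # Broadcast the normalised timestep as an extra conditioning channel.
        t_map = t.view(-1, 1, 1, 1).expand(-1, 1, *z.shape[2:])
        return self.net(torch.cat([z, t_map], dim=1))

@torch.no_grad()
def sample_adverse_weather_scan(ae, denoiser, T=1000, shape=(1, 8, 16, 256)):
    """DDPM-style ancestral sampling in latent space, then decode to a range image."""
    betas = torch.linspace(1e-4, 2e-2, T)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    z = torch.randn(shape)
    for t in reversed(range(T)):
        eps = denoiser(z, torch.full((shape[0],), t / T))
        mean = (z - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(z) if t > 0 else torch.zeros_like(z)
        z = mean + torch.sqrt(betas[t]) * noise
    return ae.decoder(z)  # decoded range image

range_image = sample_adverse_weather_scan(RangeImageAutoencoder(), LatentDenoiser())
```

Re-projecting the decoded range image spherically would then yield the synthetic adverse-weather point cloud; the paper's actual encoder, denoiser, and noise schedule are not specified in the text above.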
📝 Abstract
LiDAR scenes constitute a fundamental source of data for several autonomous driving applications. Despite the existence of several datasets, scenes captured under adverse weather conditions are rarely available. This limits the robustness of downstream machine learning models and restricts the reliability of autonomous driving systems in particular locations and seasons. Collecting feature-diverse scenes under adverse weather conditions is challenging due to seasonal limitations. Generative models are therefore essential, especially for generating adverse weather conditions for specific driving scenarios. In our work, we propose a latent diffusion process composed of an autoencoder and a latent diffusion model. Moreover, we leverage clear-condition LiDAR scenes in a postprocessing step to improve the realism of the generated adverse-weather scenes.
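The abstract does not detail the clear-scene postprocessing step. One plausible, purely illustrative interpretation is to use the paired clear-weather range image as a geometric constraint: rain and fog tend to produce scattered returns in front of real surfaces (or dropouts), so generated ranges that fall behind the clear-weather surface along a ray can be snapped back onto it. The function name, tolerance, and dropout handling below are assumptions for the sketch, not the paper's method.

```python
# Hypothetical clear-scene-guided post-processing: treat the paired clear-weather
# range image as a geometric upper bound on the generated adverse-weather ranges.
import numpy as np

def clear_scene_guided_postprocess(generated_range, clear_range, tol=0.2):
    """Clamp generated ranges to the clear-weather geometry along each ray."""
    out = generated_range.copy()
    valid = clear_range > 0                      # rays with a real clear-weather return
    too_far = valid & (out > clear_range + tol)  # returns behind the surface are implausible
    out[too_far] = clear_range[too_far]          # snap them back onto the clear geometry
    out[out < 0] = 0.0                           # negative ranges become dropouts
    return out

# Usage with dummy 64x1024 range images (metres):
clear = np.random.uniform(2.0, 80.0, size=(64, 1024)).astype(np.float32)
generated = clear + np.random.normal(0.0, 1.0, size=clear.shape).astype(np.float32)
weathered = clear_scene_guided_postprocess(generated, clear)
```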