🤖 AI Summary
This work addresses the severe degradation of LiDAR perception in snowy conditions, which poses a critical threat to the reliability of autonomous driving systems. To tackle this challenge, the authors propose LIORNet, a self-supervised snow removal framework that, for the first time, unifies range-based, intensity-based, and learning-based strategies without requiring manual annotations. By leveraging multi-source physical priors—including snow reflectance characteristics, point cloud sparsity, and ranging constraints—the method generates high-quality pseudo-labels. Built upon a U-Net++ architecture, LIORNet significantly outperforms existing approaches on the WADS and CADC datasets, achieving state-of-the-art performance in denoising accuracy, inference speed, and preservation of critical structural details. The proposed framework thus enhances both the generalization capability and practical applicability of LiDAR-based perception under adverse weather conditions.
📝 Abstract
LiDAR sensors provide high-resolution 3D perception and long-range detection, making them indispensable for autonomous driving and robotics. However, their performance degrades significantly under adverse weather conditions such as snow, rain, and fog, where spurious noise points dominate the point cloud and lead to perception failures. To address this problem, various approaches have been proposed: distance-based filters exploiting spatial sparsity, intensity-based filters leveraging reflectance distributions, and learning-based methods that adapt to complex environments. Nevertheless, distance-based methods struggle to distinguish valid object points from noise, intensity-based methods often rely on fixed thresholds that lack adaptability to changing conditions, and learning-based methods suffer from high annotation cost, limited generalization, and computational overhead. In this study, we propose LIORNet, which overcomes these drawbacks while integrating the strengths of all three paradigms. LIORNet is built upon a U-Net++ backbone and employs a self-supervised learning strategy guided by pseudo-labels generated from multiple physical and statistical cues, including range-dependent intensity thresholds, snow reflectivity, point sparsity, and sensing range constraints. This design enables LIORNet to distinguish noise points from environmental structures without requiring manual annotations, thereby overcoming the difficulty of snow labeling and the limitations of single-principle approaches. Extensive experiments on the WADS and CADC datasets demonstrate that LIORNet outperforms state-of-the-art filtering algorithms in both accuracy and runtime while preserving critical environmental features. These results highlight LIORNet as a practical and robust solution for LiDAR perception in extreme weather, with strong potential for real-time deployment in autonomous driving systems.
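The pseudo-label generation described above could be sketched as a simple voting scheme over the stated cues. The sketch below is an assumption-laden illustration, not the paper's actual rule set: the threshold values (`base_int_thresh`, `range_scale`, `sparsity_radius`, `min_neighbors`, `max_range`) and the two-of-three voting rule are all hypothetical choices made here for clarity, and the brute-force neighbor search stands in for whatever spatial indexing the authors use.

```python
import numpy as np

def pseudo_label_snow(points, intensity,
                      base_int_thresh=10.0, range_scale=0.05,
                      sparsity_radius=0.5, min_neighbors=2,
                      max_range=80.0):
    """Pseudo-label each point as snow (1) or clean (0) from three
    heuristic cues: range-dependent intensity, local sparsity, and
    sensing-range limits. All thresholds are illustrative assumptions."""
    r = np.linalg.norm(points, axis=1)  # per-point range from the sensor

    # Cue 1: snow returns reflect weakly; let the intensity threshold
    # grow with range to compensate for signal falloff.
    low_intensity = intensity < (base_int_thresh + range_scale * r)

    # Cue 2: airborne snow is spatially sparse, so it has few close
    # neighbors. Brute-force O(n^2) distances, fine for a small sketch.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    neighbors = (d < sparsity_radius).sum(axis=1) - 1  # exclude self
    sparse = neighbors < min_neighbors

    # Cue 3: returns beyond the trusted sensing range are unreliable.
    out_of_range = r > max_range

    # Hypothetical fusion rule: a point is labeled snow when at least
    # two of the three cues agree.
    votes = (low_intensity.astype(int) + sparse.astype(int)
             + out_of_range.astype(int))
    return (votes >= 2).astype(int)
```

On a toy cloud with a dense, high-intensity cluster (a wall or vehicle) and one isolated low-intensity return, the isolated point accumulates two votes (low intensity, sparse) and is labeled snow, while the cluster points accumulate none.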