Event-Based De-Snowing for Autonomous Driving

πŸ“… 2025-07-25
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Snowfall severely degrades visual perception for autonomous driving, yet existing image- or video-based de-snowing methods suffer from hallucination artifacts, misalignment distortions, and limited generalization. To address this, we propose the first event-camera-based snow removal method: leveraging the distinctive spatiotemporal streak patterns induced by snow occlusions in event streams, we design an attention-driven neural network module that precisely localizes occluded regions and recovers the underlying background intensity. To enable training and benchmarking, we introduce DSEC-Snow, the first synchronized event-image snow dataset. Experiments demonstrate that our method achieves a 3 dB PSNR improvement over state-of-the-art approaches. Critically, the reconstructed images are directly usable for downstream vision tasks: depth estimation and optical flow computation improve by 20%, significantly enhancing the robustness and reliability of visual perception under adverse weather conditions.

πŸ“ Abstract
Adverse weather conditions, particularly heavy snowfall, pose significant challenges to both human drivers and autonomous vehicles. Traditional image-based de-snowing methods often introduce hallucination artifacts as they rely solely on spatial information, while video-based approaches require high frame rates and suffer from alignment artifacts at lower frame rates. Camera parameters, such as exposure time, also influence the appearance of snowflakes, making the problem difficult to solve and heavily dependent on network generalization. In this paper, we propose to address the challenge of de-snowing by using event cameras, which offer compressed visual information with submillisecond latency, making them ideal for de-snowing images, even in the presence of ego-motion. Our method leverages the fact that snowflake occlusions appear with a very distinctive streak signature in the spatio-temporal representation of event data. We design an attention-based module that focuses on events along these streaks to determine when a background point was occluded and use this information to recover its original intensity. We benchmark our method on DSEC-Snow, a new dataset created using a green-screen technique that overlays pre-recorded snowfall data onto the existing DSEC driving dataset, resulting in precise ground truth and synchronized image and event streams. Our approach outperforms state-of-the-art de-snowing methods by 3 dB in PSNR for image reconstruction. Moreover, we show that off-the-shelf computer vision algorithms can be applied to our reconstructions for tasks such as depth estimation and optical flow, achieving a 20% performance improvement over other de-snowing methods. Our work represents a crucial step towards enhancing the reliability and safety of vision systems in challenging winter conditions, paving the way for more robust, all-weather-capable applications.
Problem

Research questions and friction points this paper is trying to address.

De-snowing images for autonomous driving in heavy snowfall
Overcoming artifacts in traditional image and video de-snowing methods
Improving vision system reliability in adverse winter conditions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses event cameras for submillisecond latency de-snowing
Leverages snowflake streak signatures in event data
Attention-based module recovers original image intensity
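The core idea above — attending over the temporal dimension of event data to find moments when a background point was unoccluded — can be illustrated with a minimal sketch. This is not the authors' implementation; the voxel-grid input, the softmax weighting, and the per-bin intensity observations are all assumptions made for illustration. The sketch weights each temporal bin inversely to its event activity, since snow streaks produce dense bursts of events:

```python
import numpy as np

def streak_attention_desnow(event_voxels, intensities):
    """Illustrative sketch: soft-attend over the temporal bins of an
    event voxel grid to recover background intensity per pixel.

    event_voxels: (T, H, W) event counts per temporal bin (assumed input)
    intensities:  (T, H, W) per-bin intensity observations (assumed input)
    returns:      (H, W) reconstructed background intensity
    """
    # Snow streaks appear as dense event activity; weight each bin
    # inversely to its event count via a softmax over negative activity.
    logits = -event_voxels.astype(np.float64)        # (T, H, W)
    logits -= logits.max(axis=0, keepdims=True)      # numerical stability
    weights = np.exp(logits)
    weights /= weights.sum(axis=0, keepdims=True)    # attention over time
    # Weighted blend: bins with little event activity (unoccluded
    # background) dominate; occluded bins are suppressed.
    return (weights * intensities).sum(axis=0)
```

In the paper this role is played by a learned attention module operating on the full spatio-temporal streak signature; the hand-crafted softmax here only conveys the temporal-weighting intuition.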