🤖 AI Summary
To address the ill-posed nature of single-image dehazing under heavy haze—caused by the limited dynamic range of RGB images and consequent loss of structural and illumination details—this work pioneers the integration of event cameras into dehazing. We propose an Event-Guided Diffusion Model (EGDM), which leverages the high dynamic range (HDR) and microsecond temporal resolution of event streams to inject sparse, structure-rich priors into the latent space of a diffusion model. Specifically, we design an event feature extraction and latent-space mapping module to enable effective cross-modal HDR information transfer from events to RGB dehazing. Our key contributions are: (1) the first application of event cameras to image dehazing; and (2) an event-guided mechanism that mitigates semantic drift and enhances visual realism. EGDM achieves state-of-the-art performance on two public benchmarks and a newly constructed heavy-haze UAV dataset (AQI = 341).
📝 Abstract
Clear imaging under hazy conditions is a critical task. Prior-based and learning-based methods have improved dehazing results, but they operate on RGB frames, whose limited dynamic range leaves the problem ill-posed and can erase structural and illumination details. To address this, we use event cameras for dehazing for the **first time**. Event cameras offer a much higher dynamic range (120 dB vs. 60 dB) and microsecond latency, making them well suited to hazy scenes. In practice, transferring HDR cues from events to frames is difficult because real paired data are scarce. We therefore propose an event-guided diffusion model that exploits the strong generative priors of diffusion models to reconstruct clear images from hazy inputs while effectively transferring HDR information from events. Specifically, we design an event-guided module that maps sparse HDR event features, *e.g.*, edges and corners, into the diffusion latent space. This conditioning provides precise structural guidance during generation, improves visual realism, and reduces semantic drift. For real-world evaluation, we collect a drone dataset in heavy haze (AQI = 341) with synchronized RGB and event sensors. Experiments on two public benchmarks and our dataset achieve state-of-the-art results.
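The summary does not specify how the asynchronous event stream is represented before it is mapped into the diffusion latent space. A common first step in event-based vision is to accumulate events into a fixed-size spatio-temporal voxel grid that a standard CNN encoder can consume. The sketch below is illustrative only (the function name, bin count, and bilinear time-interpolation scheme are assumptions, not details from the paper):

```python
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """Accumulate an event stream into a (num_bins, H, W) voxel grid.

    events: (N, 4) array of [x, y, t, polarity], polarity in {+1, -1}.
    Each event's polarity is split between its two nearest temporal
    bins via bilinear interpolation in time, a widely used scheme
    for feeding events to convolutional encoders.
    """
    grid = np.zeros((num_bins, height, width), dtype=np.float32)
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    t = events[:, 2].astype(np.float64)
    p = events[:, 3].astype(np.float32)

    # Normalize timestamps to the continuous bin axis [0, num_bins - 1].
    t = (t - t.min()) / max(t.max() - t.min(), 1e-9) * (num_bins - 1)
    t0 = np.floor(t).astype(int)
    frac = (t - t0).astype(np.float32)

    # Scatter-add each event into its lower and upper temporal bin.
    np.add.at(grid, (t0, y, x), p * (1.0 - frac))
    np.add.at(grid, (np.clip(t0 + 1, 0, num_bins - 1), y, x), p * frac)
    return grid
```

A tensor of this shape could then be passed through a convolutional feature extractor and injected into the denoiser's latent space as a conditioning signal; the paper's actual event-guided module may differ.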