From Events to Clarity: The Event-Guided Diffusion Framework for Dehazing

📅 2025-11-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the ill-posed nature of single-image dehazing under heavy haze—caused by the limited dynamic range of RGB images and consequent loss of structural and illumination details—this work pioneers the integration of event cameras into dehazing. We propose an Event-Guided Diffusion Model (EGDM), which leverages the high dynamic range (HDR) and microsecond temporal resolution of event streams to inject sparse, structure-rich priors into the latent space of a diffusion model. Specifically, we design an event feature extraction and latent-space mapping module to enable effective cross-modal HDR information transfer from events to RGB dehazing. Our key contributions are: (1) the first application of event cameras to image dehazing; and (2) an event-guided mechanism that mitigates semantic drift and enhances visual realism. EGDM achieves state-of-the-art performance on two public benchmarks and a newly constructed heavy-haze UAV dataset (AQI = 341).
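The summary names an "event feature extraction and latent-space mapping module" without giving its internals. Below is a minimal, hypothetical PyTorch sketch of how sparse event features could be projected into a diffusion latent space; EventLatentMapper, the layer widths, the 8x downsampling factor, and the zero-initialized projection are all illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch (not the authors' code) of an event-to-latent conditioning
# module, assuming events arrive pre-binned as a voxel grid of shape
# (B, num_bins, H, W) and the diffusion model operates on a latent of shape
# (B, latent_dim, H/8, W/8). All layer sizes are illustrative.
import torch
import torch.nn as nn

class EventLatentMapper(nn.Module):
    def __init__(self, num_bins=5, latent_dim=4):
        super().__init__()
        # Three strided convolutions downsample the sparse event tensor by 8x
        # so its spatial resolution matches the diffusion latent.
        self.encoder = nn.Sequential(
            nn.Conv2d(num_bins, 64, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(128, 128, 3, stride=2, padding=1), nn.SiLU(),
        )
        # Zero-initialized projection: at the start of training the event
        # branch contributes nothing, so it cannot destabilize a pretrained
        # diffusion prior (a common trick, cf. ControlNet).
        self.proj = nn.Conv2d(128, latent_dim, 1)
        nn.init.zeros_(self.proj.weight)
        nn.init.zeros_(self.proj.bias)

    def forward(self, event_voxels: torch.Tensor) -> torch.Tensor:
        return self.proj(self.encoder(event_voxels))

# Usage: the mapped event features can be added to the noisy latent at each
# denoising step, injecting structural (edge/corner) guidance.
mapper = EventLatentMapper()
event_voxels = torch.randn(1, 5, 256, 256)  # dummy event voxel grid
guidance = mapper(event_voxels)             # -> shape (1, 4, 32, 32)
```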

📝 Abstract
Clear imaging under hazy conditions is a critical task. Prior-based and neural methods have improved results, but they operate on RGB frames, which suffer from limited dynamic range; dehazing therefore remains ill-posed and can erase structure and illumination details. To address this, we use event cameras for dehazing for the first time. Event cameras offer a much higher dynamic range (120 dB vs. 60 dB) and microsecond latency, which makes them well suited to hazy scenes. In practice, transferring HDR cues from events to frames is hard because real paired data are scarce. To tackle this, we propose an event-guided diffusion model that exploits the strong generative priors of diffusion models to reconstruct clear images from hazy inputs by effectively transferring HDR information from events. Specifically, we design an event-guided module that maps sparse HDR event features, e.g., edges and corners, into the diffusion latent space. This conditioning provides precise structural guidance during generation, improves visual realism, and reduces semantic drift. For real-world evaluation, we collect a drone dataset in heavy haze (AQI = 341) with synchronized RGB and event sensors. Experiments on two benchmarks and our dataset achieve state-of-the-art results.
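The abstract treats the event stream as a source of sparse HDR features. A common preprocessing step, assumed here rather than stated in the abstract, is to bin raw events (x, y, t, polarity) into a voxel grid so a CNN can consume them; the function name, bin count, and sensor resolution below are illustrative.

```python
# A minimal sketch (an assumption, not the paper's pipeline) of turning a raw
# event stream -- tuples (x, y, t, polarity) with microsecond timestamps --
# into a dense voxel grid.
import numpy as np

def events_to_voxel_grid(events: np.ndarray, num_bins: int,
                         height: int, width: int) -> np.ndarray:
    """events: (N, 4) array of [x, y, t, polarity], polarity in {-1, +1}."""
    voxels = np.zeros((num_bins, height, width), dtype=np.float32)
    t = events[:, 2]
    # Normalize timestamps to [0, num_bins) so each event lands in one bin.
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9) * (num_bins - 1e-6)
    bins = t_norm.astype(np.int64)
    xs = events[:, 0].astype(np.int64)
    ys = events[:, 1].astype(np.int64)
    pol = events[:, 3].astype(np.float32)
    # Accumulate signed polarities; np.add.at sums repeated indices.
    np.add.at(voxels, (bins, ys, xs), pol)
    return voxels

# Example: 1000 random events on a 346x260 sensor (DAVIS-like resolution).
rng = np.random.default_rng(0)
ev = np.stack([rng.integers(0, 346, 1000), rng.integers(0, 260, 1000),
               np.sort(rng.random(1000)), rng.choice([-1.0, 1.0], 1000)],
              axis=1)
grid = events_to_voxel_grid(ev, num_bins=5, height=260, width=346)
```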
Problem

Research questions and friction points this paper is trying to address.

Dehazing RGB images with limited dynamic range
Transferring HDR information from event cameras to frames
Reconstructing clear images with an event-guided diffusion model
Innovation

Methods, ideas, or system contributions that make the work stand out.

Event-guided diffusion model for image dehazing
Mapping HDR event features into the diffusion latent space (a denoising-step sketch follows this list)
First use of event cameras in dehazing tasks
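To make the guidance mechanism concrete, here is a hedged sketch of one event-conditioned denoising step, written against a diffusers-style unet/scheduler interface. event_guided_step and the additive injection point are assumptions for illustration, not the authors' published sampler.

```python
# A hedged sketch of one reverse-diffusion step with event conditioning.
# `unet` and `scheduler` are placeholders for a latent diffusion backbone
# and noise scheduler, e.g. from the `diffusers` library; `event_feat` is
# the output of an event-to-latent mapper with the same shape as `latent`.
import torch

@torch.no_grad()
def event_guided_step(unet, scheduler, latent, event_feat, timestep):
    # Inject structural guidance by adding the mapped event features to the
    # noisy latent before predicting the noise.
    noise_pred = unet(latent + event_feat, timestep).sample
    # Standard reverse-diffusion update: the scheduler removes the predicted
    # noise for this timestep and returns the previous (less noisy) latent.
    return scheduler.step(noise_pred, timestep, latent).prev_sample
```

Adding the event features to the latent (rather than, say, cross-attention) is just one plausible design; it keeps the sketch short while showing where HDR structure could enter the generative loop.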
👥 Authors
Ling Wang
The Hong Kong University of Science and Technology (Guangzhou)
Yunfan Lu
The Hong Kong University of Science and Technology (Guangzhou)
Wenzong Ma
The Hong Kong University of Science and Technology (Guangzhou)
Huizai Yao
HKUST(GZ)
Transfer Learning, Computer Vision
Pengteng Li
HKUST(GZ) / SZU
MLLM, Object Detection, Event Camera
Hui Xiong
Senior Scientist, Candela Corporation
Ultrafast dynamics, atomic molecular physics, free electron laser