🤖 AI Summary
Nighttime image dehazing faces severe challenges under dense fog and strong halos, where background information is heavily degraded or completely lost. Existing methods suffer from insufficient background priors and limited generative capacity, yielding suboptimal results. This paper introduces diffusion models to nighttime dehazing for the first time, proposing a knowledge distillation–based model adaptation framework and a controllable generation mechanism to jointly suppress fog and halos while plausibly reconstructing missing background content. A paired-image guided training strategy incorporates task-specific priors, enabling user-controllable dehazing intensity while preserving physical consistency. Experiments on real-world nighttime foggy images demonstrate significant visibility improvement, faithful recovery of structural and textural details, reduced hallucination, and a balanced trade-off between visual realism and content fidelity.
📝 Abstract
Nighttime image dehazing is particularly challenging when dense haze and intense glow severely degrade or completely obscure background information. Existing methods often struggle due to insufficient background priors and limited generative ability, both essential for handling such conditions. In this paper, we introduce BeyondHaze, a generative nighttime dehazing method that not only significantly reduces haze and glow effects but also infers background information in regions where it may be absent. Our approach builds on two main ideas: gaining strong background priors by adapting image diffusion models to the nighttime dehazing problem, and enhancing generative ability for haze- and glow-obscured scene areas through guided training. Task-specific nighttime dehazing knowledge is distilled into an image diffusion model in a manner that preserves its capacity to generate clean images. The diffusion model is additionally trained on image pairs designed to improve its ability to generate background details and content that are missing in the input image due to haze effects. Since generative models are susceptible to hallucinations, we develop our framework to allow user control over the generative level, balancing visual realism and factual accuracy. Experiments on real-world images demonstrate that BeyondHaze effectively restores visibility in dense nighttime haze.
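The abstract does not specify how the user-controllable generative level is implemented; one simple way to picture the realism-vs-fidelity trade-off it describes is as a weighted blend between a conservative restoration and generatively filled-in content. The sketch below is purely illustrative, not the paper's mechanism; `restorer`, `generator`, and `gen_level` are all hypothetical names.

```python
import numpy as np

def controllable_dehaze(hazy, restorer, generator, gen_level):
    """Blend a conservative restoration with generative content.

    gen_level=0.0 -> only the restorer's output (high content
    fidelity, but obscured regions may stay empty);
    gen_level=1.0 -> fully trust the generative model (high
    visual realism, higher risk of hallucination).
    Both callables are hypothetical stand-ins, not BeyondHaze's
    actual distilled diffusion model.
    """
    assert 0.0 <= gen_level <= 1.0
    restored = restorer(hazy)
    generated = generator(hazy)
    return (1.0 - gen_level) * restored + gen_level * generated

# Toy stand-ins: the "restorer" just brightens the hazy input,
# while the "generator" invents constant background content.
restorer = lambda x: np.clip(x * 1.5, 0.0, 1.0)
generator = lambda x: np.full_like(x, 0.5)

hazy = np.full((4, 4), 0.2)           # uniformly dim hazy image
conservative = controllable_dehaze(hazy, restorer, generator, 0.0)
creative = controllable_dehaze(hazy, restorer, generator, 1.0)
```

Intermediate `gen_level` values interpolate between the two behaviors, mirroring the paper's stated goal of letting the user balance visual realism against factual accuracy.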