Zero-Shot Low Light Image Enhancement with Diffusion Prior

📅 2024-12-18
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses low-light image enhancement (LLIE), proposing a zero-shot method that requires no training, fine-tuning, text conditioning, or hyperparameter optimization. Methodologically, it pioneers the use of frozen, pre-trained text-to-image diffusion models (e.g., Stable Diffusion) as universal visual priors. By leveraging latent-space feature guidance, reparameterizing the reverse denoising process, and performing unsupervised gradient mapping, the approach achieves end-to-end illumination restoration—eliminating handcrafted constraints and iterative optimization for true plug-and-play deployment. Experimentally, it surpasses state-of-the-art (SOTA) methods across multiple benchmarks, significantly improving brightness consistency, color fidelity, and white balance accuracy. Notably, on the automatic white balance (AWB) subtask, it attains performance comparable to dedicated SOTA AWB methods.
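The summary above describes steering a frozen diffusion model's reverse denoising process with gradients of a latent-space feature loss. The paper's exact guidance terms are not reproduced here; the following is a minimal toy sketch of gradient-guided reverse diffusion in that spirit, where the denoiser, the quadratic feature loss, the noise schedule, and all names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def toy_denoiser(x_t, t):
    # Stand-in for a frozen, pre-trained diffusion U-Net: predicts the
    # noise component of x_t (here just a fixed linear map, for illustration).
    return 0.1 * x_t

def feature_loss_grad(x, target_feats):
    # Analytic gradient of a quadratic latent-feature loss ||x - target||^2,
    # standing in for the latent-space feature guidance described above.
    return 2.0 * (x - target_feats)

def guided_reverse_step(x_t, t, alphas_cumprod, target_feats, guidance_scale=0.05):
    # One deterministic (DDIM-style) reverse denoising step in which the
    # predicted noise is corrected by the guidance gradient, so the sample
    # drifts toward the target features without any training or fine-tuning.
    a_t = alphas_cumprod[t]
    a_prev = alphas_cumprod[t - 1] if t > 0 else 1.0
    eps = toy_denoiser(x_t, t)
    eps = eps + guidance_scale * np.sqrt(1.0 - a_t) * feature_loss_grad(x_t, target_feats)
    # Predict the clean sample, then take the deterministic update.
    x0_pred = (x_t - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)
    return np.sqrt(a_prev) * x0_pred + np.sqrt(1.0 - a_prev) * eps

# Usage: a short guided reverse trajectory from pure noise.
T = 10
alphas_cumprod = np.linspace(0.9999, 0.1, T)   # illustrative noise schedule
target = np.ones(4)                            # stand-in "well-lit" latent features
x = np.random.default_rng(0).standard_normal(4)
for t in range(T - 1, 0, -1):
    x = guided_reverse_step(x, t, alphas_cumprod, target)
```

Setting `guidance_scale=0.0` recovers the unguided trajectory; the guidance term is what pulls the sample toward the target features, which is the sense in which the prior is used "plug-and-play".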

📝 Abstract
In this paper, we present a simple yet highly effective "free lunch" solution for low-light image enhancement (LLIE), which aims to restore low-light images as if acquired in well-illuminated environments. Our method necessitates no training, fine-tuning, text conditioning, or hyperparameter adjustments, yet it consistently reconstructs low-light images with superior fidelity. Specifically, we leverage a pre-trained text-to-image diffusion prior, learned from training on a large collection of natural images, and the features present in the model itself to guide the inference, in contrast to existing methods that depend on customized constraints. Comprehensive quantitative evaluations demonstrate that our approach outperforms SOTA methods on established datasets, while qualitative analyses indicate enhanced color accuracy and the rectification of subtle chromatic deviations. Furthermore, additional experiments reveal that our method, without any modifications, achieves SOTA-comparable performance in the auto white balance (AWB) task.
Problem

Research questions and friction points this paper is trying to address.

How to restore low-light images to well-illuminated quality
How to exploit a pre-trained diffusion prior without any training or fine-tuning
How to match or surpass SOTA methods in both image enhancement and auto white balance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Repurposes a frozen, pre-trained text-to-image diffusion model as a visual prior for enhancement
Requires no training, fine-tuning, or hyperparameter tuning
Achieves superior fidelity and color accuracy
Joshua Cho (University of Illinois Urbana-Champaign)
Sara Aghajanzadeh (University of Illinois Urbana-Champaign)
Zhen Zhu (University of Illinois Urbana-Champaign)
D. A. Forsyth (University of Illinois Urbana-Champaign)