🤖 AI Summary
Real-world low-light face images often suffer from multiple degradations, including insufficient illumination, blur, noise, and poor visibility. Existing methods struggle to recover clear facial structures due to either cascaded error accumulation or the absence of explicit facial priors. To address this, this work proposes PASDiff, a training-free, physics-aware semantic diffusion framework that integrates photometric constraints via inverse intensity weighting and Retinex theory to restore natural illumination and color fidelity. Furthermore, a Style-Agnostic Structure Injection (SASI) module is introduced to incorporate external facial structural priors while filtering out their photometric biases. By decoupling physical consistency from identity characteristics and optimizing them jointly, PASDiff achieves state-of-the-art performance on the newly curated real-world dataset WildDark-Face, striking a strong balance among natural lighting, accurate color reproduction, and identity preservation.
📝 Abstract
Face images captured in real-world low-light conditions suffer from multiple degradations, including low illumination, blur, noise, and poor visibility. Existing cascaded solutions often suffer from severe error accumulation, while generic joint models lack explicit facial priors and struggle to recover clear facial structures. In this paper, we propose PASDiff, a training-free Physics-Aware Semantic Diffusion framework. To achieve a plausible illumination and color distribution, we leverage inverse intensity weighting and Retinex theory to introduce photometric constraints, thereby reliably recovering visibility and natural chromaticity. To faithfully reconstruct facial details, our Style-Agnostic Structure Injection (SASI) module extracts structures from an off-the-shelf facial prior while filtering out its intrinsic photometric biases, seamlessly harmonizing identity features with physical constraints. Furthermore, we construct WildDark-Face, a real-world benchmark of 700 low-light facial images with complex degradations. Extensive experiments demonstrate that PASDiff significantly outperforms existing methods, achieving a superior balance among natural illumination, color recovery, and identity consistency.
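To make the photometric-constraint idea concrete, here is a minimal sketch of a Retinex decomposition combined with inverse intensity weighting. The channel-max illumination estimate, the function names, and the weighted loss are illustrative assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np

def retinex_decompose(img, eps=1e-6):
    """Single-scale Retinex: I = R * L.
    Illumination L is coarsely estimated as the per-pixel channel
    maximum (a common heuristic); reflectance is R = I / L."""
    L = img.max(axis=-1, keepdims=True)   # (H, W, 1) illumination map
    R = img / (L + eps)                   # reflectance, roughly in [0, 1]
    return R, L

def inverse_intensity_weights(img):
    """Weight map that emphasizes dark pixels: w = 1 - mean intensity,
    so under-exposed regions receive stronger guidance."""
    return 1.0 - img.mean(axis=-1, keepdims=True)

# Toy low-light image with values in [0, 0.3]
img = np.random.rand(4, 4, 3) * 0.3
R, L = retinex_decompose(img)
w = inverse_intensity_weights(img)

# A hypothetical photometric-guidance term: weight the reconstruction
# error by w so dark regions are constrained more heavily.
recon = np.clip(R * (L * 2.0), 0.0, 1.0)   # illustrative brightened estimate
loss = float((w * (recon - img) ** 2).mean())
```

Under this weighting, pixels that start out darker contribute more to the guidance signal, which matches the intuition that low-light regions need the strongest photometric correction.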