🤖 AI Summary
Existing low-light enhancement (LLE) methods largely neglect geometric priors, limiting their ability to model the physical structure of the scene. To address this, the authors propose GG-LLERF, a depth-driven refinement framework that systematically incorporates depth information as geometric guidance into existing LLE models. Its key components are: (1) a depth-aware feature extraction module that injects depth priors into the image representation, and (2) a Hierarchical Depth-Guided Feature Fusion Module (HDGFFM) built around a cross-domain attention mechanism that combines depth-aware features with the original image features inside the LLE model. All components are end-to-end differentiable and trainable. Extensive experiments on public low-light image and video benchmarks show that integrating GG-LLERF consistently improves existing state-of-the-art LLE methods, with significant gains in PSNR and SSIM. Ablation studies confirm that explicit geometric priors, particularly depth, deliver substantial improvements in illumination recovery, underlining their role in low-light enhancement.
📝 Abstract
Low-Light Enhancement (LLE) aims to improve the quality of photos and videos captured under low-light conditions. Notably, most existing LLE methods do not take advantage of geometric modeling. We believe that incorporating geometric information can enhance LLE performance, as it provides insight into the physical structure of the scene that influences illumination conditions. To this end, we propose a Geometry-Guided Low-Light Enhancement Refine Framework (GG-LLERF), designed to help low-light enhancement models learn better features by integrating geometric priors into the feature representation space. In this paper, we employ depth priors as the geometric representation. Our approach integrates depth priors into various LLE frameworks through a unified methodology comprising two novel modules. First, a depth-aware feature extraction module injects depth priors into the image representation. Then, a Hierarchical Depth-Guided Feature Fusion Module (HDGFFM) with a cross-domain attention mechanism combines the depth-aware features with the original image features inside the LLE model. We conducted extensive experiments on public low-light image and video enhancement benchmarks. The results demonstrate that the proposed framework significantly improves existing LLE methods.
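To make the two modules concrete, below is a minimal PyTorch-style sketch of how depth priors could be injected into image features and fused via cross-domain attention. The module names, layer choices, and tensor shapes here are illustrative assumptions; the abstract does not specify the exact architecture of the depth-aware extractor or the HDGFFM.

```python
# Minimal sketch of depth-guided feature fusion (assumed layout, not the paper's exact design).
import torch
import torch.nn as nn


class DepthAwareFeatureExtractor(nn.Module):
    """Injects a depth prior into the image representation (hypothetical layer choices)."""

    def __init__(self, channels: int):
        super().__init__()
        self.depth_proj = nn.Conv2d(1, channels, kernel_size=3, padding=1)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, img_feat: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        depth_feat = self.depth_proj(depth)                     # lift the depth map into feature space
        return self.fuse(torch.cat([img_feat, depth_feat], 1))  # depth-aware features


class CrossDomainAttentionFusion(nn.Module):
    """One fusion step of an HDGFFM-like block: image features attend to depth-aware features."""

    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, img_feat: torch.Tensor, depth_feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = img_feat.shape
        q = img_feat.flatten(2).transpose(1, 2)    # (B, HW, C): queries from the LLE backbone
        kv = depth_feat.flatten(2).transpose(1, 2) # keys/values from the depth-aware features
        fused, _ = self.attn(q, kv, kv)
        fused = self.norm(q + fused)               # residual keeps the original image features
        return fused.transpose(1, 2).reshape(b, c, h, w)


if __name__ == "__main__":
    # Toy usage: extract depth-aware features, then fuse them with backbone features.
    img_feat = torch.randn(1, 64, 32, 32)
    depth = torch.rand(1, 1, 32, 32)
    extractor = DepthAwareFeatureExtractor(64)
    fusion = CrossDomainAttentionFusion(64)
    depth_aware = extractor(img_feat, depth)
    out = fusion(img_feat, depth_aware)
    print(out.shape)  # torch.Size([1, 64, 32, 32])
```

In practice, the fused features would replace the corresponding feature maps at one or more levels of the host LLE model (hence "hierarchical"), leaving the rest of its pipeline unchanged.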