HiLLIE: Human-in-the-Loop Training for Low-Light Image Enhancement

📅 2025-05-04
🤖 AI Summary
In low-light image enhancement (LLIE), unsupervised methods often fail to align with human visual preferences. To address this, we propose a human-in-the-loop iterative training framework that incorporates lightweight human visual feedback—specifically, minimal pairwise image quality rankings—as supervisory signals, thereby embedding perceptual guidance into the unsupervised LLIE training loop for the first time. Our key contributions are: (1) an evolvable, preference-driven image quality assessment (IQA) model enabling fine-grained perceptual modeling; and (2) a ranking-based iterative optimization mechanism for generative models. Evaluated across multiple benchmarks, our framework consistently improves both objective metrics and human-perceived quality of diverse unsupervised LLIE models. It generates enhanced images with superior brightness, fidelity, and perceptual naturalness, outperforming existing state-of-the-art methods.

📝 Abstract
Developing effective approaches to generate enhanced results that align well with human visual preferences for high-quality well-lit images remains a challenge in low-light image enhancement (LLIE). In this paper, we propose HiLLIE, a human-in-the-loop LLIE training framework that improves the visual quality of unsupervised LLIE model outputs through iterative training stages. At each stage, we introduce human guidance into the training process through efficient visual quality annotations of enhanced outputs. Subsequently, we employ a tailored image quality assessment (IQA) model to learn the human visual preferences encoded in the acquired labels, which is then utilized to guide the training process of an enhancement model. With only a small amount of pairwise ranking annotations required at each stage, our approach continually improves the IQA model's capability to simulate human visual assessment of enhanced outputs, thus leading to visually appealing LLIE results. Extensive experiments demonstrate that our approach significantly improves unsupervised LLIE models both quantitatively and qualitatively. The code and collected ranking dataset will be available at https://github.com/LabShuHangGU/HiLLIE.
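The abstract describes training the IQA model from pairwise ranking annotations: for each annotated pair, the model should score the human-preferred image higher than the other. A minimal sketch of such an objective (a standard hinge-style margin ranking loss, not the paper's actual code; the function name and margin value are illustrative assumptions) is:

```python
def margin_ranking_loss(score_preferred, score_other, margin=1.0):
    """Hinge-style pairwise ranking loss: zero once the IQA score of the
    human-preferred image exceeds the other image's score by `margin`,
    positive (driving a gradient update) otherwise."""
    return max(0.0, margin - (score_preferred - score_other))

# Correctly ranked pair with sufficient margin -> no penalty
assert margin_ranking_loss(2.5, 1.0) == 0.0
# Mis-ranked pair -> positive loss proportional to the violation
assert margin_ranking_loss(0.5, 1.0, margin=1.0) == 1.5
```

In practice the scores would come from a learned IQA network and the loss would be averaged over all annotated pairs collected at the current training stage.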
Problem

Research questions and friction points this paper is trying to address.

Aligning low-light image enhancement with human visual preferences
Improving unsupervised LLIE model outputs via human-in-the-loop training
Learning human visual preferences through iterative quality annotations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Human-in-the-loop training for LLIE
Iterative human-guided quality annotations
IQA model learns human visual preferences
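The stage-wise cycle above (enhance, collect pairwise preferences, update toward the preferred setting) can be illustrated with a toy simulation. Everything here is an assumption for illustration: the 1-D "images", the scalar-gain enhancer, and the synthetic annotator that prefers outputs near a target brightness stand in for the paper's enhancement network, human annotators, and learned IQA model.

```python
def enhance(x, gain):
    """Toy enhancer: scales a low-light intensity value, clipped to [0, 1]."""
    return min(1.0, x * gain)

def human_prefers(a, b, target=0.8):
    """Stand-in for one pairwise human annotation: prefer the output
    closer to a hypothetical well-lit target brightness."""
    return abs(a - target) <= abs(b - target)

def hitl_stage(gain, images, step=0.1):
    """One human-in-the-loop stage: compare current outputs against a
    perturbed enhancer pairwise, and keep whichever setting the
    'annotator' prefers on the majority of images."""
    alt = gain + step
    wins = sum(human_prefers(enhance(x, alt), enhance(x, gain)) for x in images)
    return alt if wins > len(images) / 2 else gain

gain = 1.0
low_light_images = [0.1, 0.2, 0.3, 0.4]
for _ in range(10):
    gain = hitl_stage(gain, low_light_images)
# The enhancer drifts toward settings the annotator prefers (gain > 1.0).
```

The real framework replaces the scalar gain with network parameters and the majority vote with gradient updates through the ranking-trained IQA model, but the feedback structure is the same.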