Light Up Your Face: A Physically Consistent Dataset and Diffusion Model for Face Fill-Light Enhancement

📅 2026-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the foreground-background illumination inconsistency caused by global relighting in face exposure enhancement. The authors propose FiLitDiff, a method that selectively brightens only underexposed facial regions while preserving the original scene lighting. The approach leverages LYF-160K, the first large-scale, physically consistent paired dataset for this task, and introduces a physics-aware lighting prompt (PALP) integrated into a one-step conditional diffusion model. By exploiting six disentangled illumination parameters, FiLitDiff achieves high-fidelity, controllable, and background-compatible relighting. Extensive experiments demonstrate that the proposed method significantly outperforms existing approaches in both full-reference metrics and perceptual quality assessments, confirming its effectiveness and practical utility.
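This page gives no implementation details for PALP. As a minimal sketch, assuming a PyTorch setup, an encoder that embeds the six disentangled lighting factors into conditioning tokens could look like the following; the class name `LightingPromptEncoder`, the token count, and the layer widths are illustrative assumptions, not the authors' design:

```python
import torch
import torch.nn as nn

class LightingPromptEncoder(nn.Module):
    """Hypothetical sketch: map a 6-D fill-light parameter vector
    to a set of conditioning tokens for a diffusion backbone."""

    def __init__(self, num_params: int = 6, num_tokens: int = 8, dim: int = 768):
        super().__init__()
        self.num_tokens = num_tokens
        self.dim = dim
        self.mlp = nn.Sequential(
            nn.Linear(num_params, 256),
            nn.SiLU(),
            nn.Linear(256, num_tokens * dim),
        )

    def forward(self, params: torch.Tensor) -> torch.Tensor:
        # params: (batch, 6) disentangled illumination factors
        tokens = self.mlp(params)                           # (batch, num_tokens * dim)
        return tokens.view(-1, self.num_tokens, self.dim)   # (batch, num_tokens, dim)


# Usage: embed a batch of two placeholder 6-D lighting vectors.
encoder = LightingPromptEncoder()
cond = encoder(torch.rand(2, 6))
print(cond.shape)  # torch.Size([2, 8, 768])
```

Prepending such tokens to the cross-attention context of a pretrained diffusion backbone is a common way to inject low-dimensional physical conditions.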

📝 Abstract
Face fill-light enhancement (FFE) brightens underexposed faces by adding virtual fill light while keeping the original scene illumination and background unchanged. Most face relighting methods aim to reshape the overall lighting, which can suppress the input illumination or modify the entire scene, leading to foreground-background inconsistency and failing to meet practical FFE needs. To support scalable learning, we introduce LightYourFace-160K (LYF-160K), a large-scale paired dataset built with a physically consistent renderer that injects a disk-shaped area fill light controlled by six disentangled factors, producing 160K before-and-after pairs. We first pretrain a physics-aware lighting prompt (PALP) that embeds the 6D parameters into conditioning tokens, using an auxiliary planar-light reconstruction objective. Building on a pretrained diffusion backbone, we then train FiLitDiff, an efficient one-step fill-light diffusion model conditioned on physically grounded lighting codes, enabling controllable and high-fidelity fill lighting at low computational cost. Experiments on held-out paired sets demonstrate strong perceptual quality and competitive full-reference metrics, while better preserving background illumination. The dataset and model will be released at https://github.com/gobunu/Light-Up-Your-Face.
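To make "one-step" concrete: a distilled restoration model typically treats the degraded-image latent as the noisy latent at a fixed timestep and recovers the clean estimate in a single denoising pass. The sketch below assumes a diffusers-style epsilon-prediction UNet and scheduler; `one_step_fill_light` and the fixed timestep `t_star` are hypothetical and do not reproduce FiLitDiff's actual training or sampling procedure.

```python
import torch

@torch.no_grad()
def one_step_fill_light(unet, scheduler, input_latent, lighting_tokens, t_star=999):
    """Single-step conditional denoising sketch (hypothetical, not the paper's code).

    Treats the underexposed-image latent as the noisy latent x_t at a fixed
    timestep t*, predicts noise conditioned on PALP-style lighting tokens,
    and recovers the clean latent x0 in closed form.
    """
    batch = input_latent.shape[0]
    t = torch.full((batch,), t_star, device=input_latent.device, dtype=torch.long)
    # Epsilon prediction from a diffusers-style UNet2DConditionModel.
    eps = unet(input_latent, t, encoder_hidden_states=lighting_tokens).sample
    # x_t = sqrt(a_bar) * x0 + sqrt(1 - a_bar) * eps  =>  solve for x0.
    a_bar = scheduler.alphas_cumprod.to(input_latent.device)[t_star]
    return (input_latent - (1.0 - a_bar).sqrt() * eps) / a_bar.sqrt()
```

Decoding the returned latent with the backbone's VAE would yield the fill-lit image; per the abstract, the actual model is conditioned on physically grounded lighting codes so that the original background illumination is preserved.
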
Problem

Research questions and friction points this paper is trying to address.

face fill-light enhancement
foreground-background inconsistency
physically consistent relighting
virtual fill light
scene illumination preservation
Innovation

Methods, ideas, or system contributions that make the work stand out.

face fill-light enhancement
physically consistent rendering
diffusion model
lighting disentanglement
one-step relighting