🤖 AI Summary
Face relighting suffers from a scarcity of large-scale, physically consistent illumination data. To address this, we introduce POLAR, the first large-scale, physically calibrated One-Light-At-a-Time (OLAT) facial dataset, comprising over 200 subjects captured under 156 illumination directions, and propose POLARNet, a streaming generative model that predicts direction-aware per-light responses from a single input image, enabling controllable relighting that preserves identity and geometry. We pioneer a physically interpretable paradigm that models lighting transformations continuously, eschewing diffusion priors and background dependency, and establish a closed-loop self-enhancement framework integrating real acquisition, synthetic generation, and physics-based rendering. Our method unifies OLAT capture, multi-view physical calibration, conditional flow matching, implicit illumination response modeling, and identity-consistency constraints. POLARNet achieves state-of-the-art performance in cross-illumination reconstruction, zero-shot relighting, and illumination interpolation. The POLAR dataset is publicly released, and POLARNet enables real-time, high-fidelity, fine-grained directional relighting.
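The conditional flow matching component named above can be summarized in generic form: the model regresses the constant velocity of a straight-line probability path between a noise sample and a data sample. A toy NumPy sketch under that generic formulation (the zero-velocity "model" and 2-D data are placeholders for illustration, not POLARNet's architecture):

```python
import numpy as np

def cfm_loss(model, x1, rng):
    """Conditional flow matching loss for the straight-line path
    x_t = (1 - t) * x0 + t * x1, whose velocity is x1 - x0."""
    x0 = rng.standard_normal(x1.shape)   # noise endpoint of the path
    t = rng.random((x1.shape[0], 1))     # per-sample time in [0, 1)
    xt = (1 - t) * x0 + t * x1           # point on the conditional path
    v_target = x1 - x0                   # constant velocity along the path
    v_pred = model(xt, t)                # model predicts the velocity field
    return np.mean((v_pred - v_target) ** 2)

rng = np.random.default_rng(1)
zero_model = lambda xt, t: np.zeros_like(xt)  # trivial placeholder model
x1 = rng.standard_normal((8, 2))              # toy batch of 2-D "data"
loss = cfm_loss(zero_model, x1, rng)
```

In practice the placeholder model would be a neural network trained by minimizing this loss over data batches; sampling then integrates the learned velocity field from noise to data.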
📝 Abstract
Face relighting aims to synthesize realistic portraits under novel illumination while preserving identity and geometry. However, progress remains constrained by the limited availability of large-scale, physically consistent illumination data. To address this, we introduce POLAR, a large-scale, physically calibrated One-Light-at-a-Time (OLAT) dataset containing over 200 subjects captured under 156 lighting directions, multiple views, and diverse expressions. Building upon POLAR, we develop POLARNet, a flow-based generative model that predicts per-light OLAT responses from a single portrait, capturing fine-grained, direction-aware illumination effects while preserving facial identity. Unlike diffusion-based or background-conditioned methods that rely on statistical or contextual cues, our formulation models illumination as a continuous, physically interpretable transformation between lighting states, enabling scalable and controllable relighting. Together, POLAR and POLARNet form a unified illumination learning framework that links real data, generative synthesis, and physically grounded relighting, establishing a self-sustaining "chicken-and-egg" cycle for scalable and reproducible portrait illumination.
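The per-light OLAT responses described above are useful because light transport is linear: once a portrait's response to each of the 156 individual lights is known (captured or predicted), an image under any target environment is a weighted sum of those per-light images. A minimal sketch of this recombination, with toy array shapes and random weights standing in for a real environment map (the sampling scheme is an assumption, not the POLAR pipeline):

```python
import numpy as np

# Illustrative shapes: 156 OLAT images of a tiny 4x4 RGB "portrait".
NUM_LIGHTS, H, W = 156, 4, 4
rng = np.random.default_rng(0)
olat = rng.random((NUM_LIGHTS, H, W, 3))  # per-light responses in [0, 1]

# Per-direction weights, e.g. a target environment map sampled at the
# 156 light directions (random here purely for illustration).
weights = rng.random(NUM_LIGHTS)
weights /= weights.sum()  # normalize total light energy

# Linearity of light transport: relit image = weighted sum of OLAT images.
relit = np.tensordot(weights, olat, axes=1)  # contracts the light axis -> (H, W, 3)
```

Because the weights are nonnegative and sum to one, the relit image is a convex combination of the OLAT responses, which is what makes smooth illumination interpolation between lighting states well defined.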