POLAR: A Portrait OLAT Dataset and Generative Framework for Illumination-Aware Face Modeling

📅 2025-12-15

🤖 AI Summary
Face relighting suffers from a scarcity of large-scale, physically consistent illumination data. To address this, we introduce POLAR, the first large-scale, physically calibrated OLAT (One-Light-At-a-Time) facial dataset, comprising 200+ subjects and 156 illumination directions, and propose POLARNet, a flow-based generative model that predicts direction-aware per-light responses from a single input image, enabling identity- and geometry-preserving, controllable relighting. The method models lighting transformation as a continuous, physically interpretable process, avoiding diffusion priors and background dependency, and establishes a closed-loop self-enhancement framework that integrates real acquisition, synthetic generation, and physics-based rendering. It unifies OLAT capture, multi-view physical calibration, conditional flow matching, implicit illumination response modeling, and identity consistency constraints. POLARNet achieves state-of-the-art performance in cross-illumination reconstruction, zero-shot relighting, and illumination interpolation. The POLAR dataset is publicly released, and POLARNet enables real-time, high-fidelity, fine-grained directional relighting.
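The OLAT capture described above rests on the linearity of light transport: once per-light responses are available (whether captured or predicted), any target illumination is a weighted sum of those basis images. A minimal sketch of this compositing step, with all array shapes and names as illustrative assumptions rather than details from the paper:

```python
import numpy as np

def relight(olat_stack: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Composite per-light OLAT images under a target lighting.

    olat_stack: (L, H, W, 3) linear-radiance images, one per light direction.
    weights:    (L, 3) per-light RGB intensities, e.g. sampled from a target
                environment map at each of the L calibrated directions.
    """
    # Linear light transport: the relit image is a weighted sum of the bases.
    return np.einsum('lhwc,lc->hwc', olat_stack, weights)

# Toy usage: 156 lights (as in POLAR), tiny 4x4 image.
rng = np.random.default_rng(0)
olat = rng.random((156, 4, 4, 3))
env_weights = np.ones((156, 3)) / 156  # uniform "white" environment
relit = relight(olat, env_weights)
```

With uniform weights the composite reduces to the mean OLAT image; a real environment map would instead weight each direction by the sampled radiance there.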

📝 Abstract
Face relighting aims to synthesize realistic portraits under novel illumination while preserving identity and geometry. However, progress remains constrained by the limited availability of large-scale, physically consistent illumination data. To address this, we introduce POLAR, a large-scale, physically calibrated One-Light-at-a-Time (OLAT) dataset containing over 200 subjects captured under 156 lighting directions, multiple views, and diverse expressions. Building upon POLAR, we develop POLARNet, a flow-based generative model that predicts per-light OLAT responses from a single portrait, capturing fine-grained, direction-aware illumination effects while preserving facial identity. Unlike diffusion- or background-conditioned methods that rely on statistical or contextual cues, our formulation models illumination as a continuous, physically interpretable transformation between lighting states, enabling scalable and controllable relighting. Together, POLAR and POLARNet form a unified illumination learning framework that links real data, generative synthesis, and physically grounded relighting, establishing a self-sustaining "chicken-and-egg" cycle for scalable and reproducible portrait illumination.
Problem

Research questions and friction points this paper is trying to address.

Face relighting lacks large-scale, physically consistent illumination data.
Existing methods rely on statistical cues, not continuous, interpretable lighting transformations.
Need a unified framework linking real data, generative synthesis, and physical relighting.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale, physically calibrated OLAT dataset with 200+ subjects and 156 lighting directions
Flow-based generative model for per-light responses
Continuous physically interpretable illumination transformation modeling
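The summary attributes conditional flow matching to POLARNet. The core of that objective family is regressing a velocity field toward the straight-line velocity between a source and target sample along a linear interpolation path. A hedged sketch of the training target only (the network, conditioning, and data are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def cfm_pair(x0: np.ndarray, x1: np.ndarray, t: float):
    """Return the interpolated sample and the flow-matching velocity target.

    Flow matching trains a network v(x_t, t, cond) to match the constant
    velocity (x1 - x0) along the path x_t = (1 - t) * x0 + t * x1.
    """
    x_t = (1.0 - t) * x0 + t * x1
    v_target = x1 - x0
    return x_t, v_target

# Toy example: a "source lighting state" x0 flows toward a
# "target per-light response" x1 (here just 4-dim random vectors).
x0 = rng.normal(size=(4,))
x1 = rng.normal(size=(4,))
x_t, v = cfm_pair(x0, x1, t=0.25)
```

Integrating the target velocity over the remaining time 1 - t from x_t recovers x1 exactly, which is the consistency a trained velocity network approximates at sampling time.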
Authors
Zhuo Chen (Shanghai Jiao Tong University)
Chengqun Yang (Shanghai Jiao Tong University)
Zhuo Su (PICO)
Zheng Lv (PICO)
Jingnan Gao (Shanghai Jiao Tong University, Ph.D. student, Computer Vision)
Xiaoyuan Zhang (Peking University, Multi-Agent Learning, Reinforcement Learning)
Xiaokang Yang (Shanghai Jiao Tong University)
Yichao Yan (Shanghai Jiao Tong University)