RelightAnyone: A Generalized Relightable 3D Gaussian Head Model

📅 2026-01-06
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the challenge of achieving high-quality relighting for arbitrary 3D Gaussian avatars without requiring complex illumination capture setups such as OLAT (One Light At a Time). The authors propose a two-stage approach: first, constructing a 3D Gaussian avatar from ordinary multi-view images without any OLAT dependency, and then mapping it via self-supervised learning to a physically plausible reflectance representation suitable for relighting. This is the first method to enable cross-subject generalizable relighting of 3D Gaussian avatars, and it can fit a new individual from as little as a single image. By integrating 3D Gaussian splatting, a physics-based rendering model, and a two-stage training strategy, the approach supports photorealistic relighting and novel view synthesis under arbitrary lighting conditions. Experiments demonstrate significant improvements over existing methods in both visual realism and generalization capability.
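As a rough illustration of the second stage described above, the sketch below maps flat-lit per-Gaussian features to physically based reflectance parameters (albedo, roughness, normal) and shades them diffusely under a small set of light directions standing in for an environment map. All names, feature dimensions, and the Lambertian shading term are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a stage-2 style mapping: flat-lit Gaussian features
# -> physically based reflectance parameters -> relit color. Not the paper's code.
import torch
import torch.nn as nn


class ReflectanceMapper(nn.Module):
    """Maps per-Gaussian flat-lit features (e.g. color and geometry cues)
    to reflectance parameters: albedo (3), roughness (1), normal (3)."""

    def __init__(self, feat_dim=16, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3 + 1 + 3),
        )

    def forward(self, feats):
        out = self.net(feats)
        albedo = torch.sigmoid(out[:, :3])
        roughness = torch.sigmoid(out[:, 3:4])
        normal = nn.functional.normalize(out[:, 4:7], dim=-1)
        return albedo, roughness, normal


def lambertian_shade(albedo, normal, light_dirs, light_rgb):
    """Coarse diffuse shading of each Gaussian under directional light samples."""
    cos = torch.clamp(normal @ light_dirs.T, min=0.0)  # (N, L) cosine terms
    irradiance = cos @ light_rgb                        # (N, 3) summed light
    return albedo * irradiance


# Toy usage: 1000 Gaussians with 16-D flat-lit features, 32 light samples.
feats = torch.randn(1000, 16)
mapper = ReflectanceMapper()
albedo, roughness, normal = mapper(feats)
dirs = nn.functional.normalize(torch.randn(32, 3), dim=-1)
rgb = torch.rand(32, 3) * 0.2
relit_color = lambertian_shade(albedo, normal, dirs, rgb)  # (1000, 3)
```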

๐Ÿ“ Abstract
3D Gaussian Splatting (3DGS) has become a standard approach to reconstruct and render photorealistic 3D head avatars. A major challenge is to relight the avatars to match any scene illumination. For high-quality relighting, existing methods require subjects to be captured under complex time-multiplexed illumination, such as one-light-at-a-time (OLAT). We propose a new generalized relightable 3D Gaussian head model that can relight any subject observed in single- or multi-view images without requiring OLAT data for that subject. Our core idea is to learn a mapping from flat-lit 3DGS avatars to corresponding relightable Gaussian parameters for that avatar. Our model consists of two stages: a first stage that models flat-lit 3DGS avatars without OLAT lighting, and a second stage that learns the mapping to physically-based reflectance parameters for high-quality relighting. This two-stage design allows us to train the first stage across diverse existing multi-view datasets without OLAT lighting, ensuring cross-subject generalization, where we learn a dataset-specific lighting code for self-supervised lighting alignment. Subsequently, the second stage can be trained on a significantly smaller dataset of subjects captured under OLAT illumination. Together, this allows our method to generalize well and relight any subject from the first stage as if we had captured them under OLAT lighting. Furthermore, we can fit our model to unseen subjects from as little as a single image, enabling several applications in novel view synthesis and relighting for digital avatars.
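The abstract's "dataset-specific lighting code for self-supervised lighting alignment" can be pictured as a small learnable lighting representation, one per capture setup, optimized so that shaded reflectance reproduces the flat-lit observations. The sketch below uses a per-dataset second-order spherical-harmonic RGB code and a simple diffuse shading model; the code dimensionality, SH parameterization, and loss are assumptions for illustration, not the paper's actual formulation.

```python
# Hypothetical sketch of self-supervised lighting alignment with a learnable
# per-dataset lighting code. All specifics (SH order, loss) are assumptions.
import torch
import torch.nn as nn

NUM_DATASETS = 3
CODE_DIM = 9 * 3  # assumed: 2nd-order spherical-harmonic RGB lighting code


class LightingCodes(nn.Module):
    def __init__(self):
        super().__init__()
        # One learnable lighting code per training dataset / capture rig.
        self.codes = nn.Embedding(NUM_DATASETS, CODE_DIM)

    def forward(self, dataset_id):
        return self.codes(dataset_id).view(-1, 9, 3)  # (B, 9, 3) SH coefficients


def sh_irradiance(normals, sh):
    """Diffuse irradiance from 2nd-order SH coefficients (basis up to constants)."""
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    basis = torch.stack([
        torch.ones_like(x), y, z, x,
        x * y, y * z, 3 * z * z - 1, x * z, x * x - y * y,
    ], dim=-1)                                    # (N, 9)
    return torch.einsum('nk,bkc->nc', basis, sh)  # (N, 3), assumes batch of 1


# Self-supervised alignment: shaded albedo should match observed flat-lit color.
codes = LightingCodes()
opt = torch.optim.Adam(codes.parameters(), lr=1e-2)
albedo = torch.rand(500, 3)                        # stand-in reflectance outputs
normals = nn.functional.normalize(torch.randn(500, 3), dim=-1)
observed = albedo * 0.8                            # synthetic flat-lit colors

for _ in range(100):
    sh = codes(torch.tensor([0]))                  # lighting code for dataset 0
    pred = albedo * sh_irradiance(normals, sh).clamp(min=0.0)
    loss = ((pred - observed) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```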
Problem

Research questions and friction points this paper is trying to address.

relighting
3D Gaussian Splatting
avatar
illumination
generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Relightable 3D Gaussian
Generalized Avatar
Two-stage Relighting
Self-supervised Lighting Alignment
Single-image Fitting