Generalizable and Relightable Gaussian Splatting for Human Novel View Synthesis

📅 2025-05-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges of high-fidelity novel-view synthesis, cross-illumination generalization, and relighting for human avatars, proposing GRGS, an end-to-end feed-forward 3D Gaussian framework. Methodologically: (1) a Lighting-aware Geometry Refinement (LGR) module jointly optimizes geometry and material to enhance structural consistency under diverse lighting; (2) a Physically Grounded Neural Rendering (PGNR) module explicitly models direct, ambient, and indirect illumination to enable real-time shadowing and photorealistic relighting; (3) a differentiable 2D-to-3D projection training scheme bypasses explicit ray tracing, enabling joint supervision of geometry, material, and illumination cues. Evaluated under cross-subject and cross-illumination settings, the method significantly improves geometric consistency, visual fidelity, and relighting realism, supports high-quality real-time editing, and outperforms state-of-the-art methods on multiple quantitative metrics.

📝 Abstract
We propose GRGS, a generalizable and relightable 3D Gaussian framework for high-fidelity human novel view synthesis under diverse lighting conditions. Unlike existing methods that rely on per-character optimization or ignore physical constraints, GRGS adopts a feed-forward, fully supervised strategy that projects geometry, material, and illumination cues from multi-view 2D observations into 3D Gaussian representations. Specifically, to reconstruct lighting-invariant geometry, we introduce a Lighting-aware Geometry Refinement (LGR) module trained on synthetically relit data to predict accurate depth and surface normals. Building on this high-quality geometry, a Physically Grounded Neural Rendering (PGNR) module integrates neural prediction with physics-based shading, supporting editable relighting with shadows and indirect illumination. In addition, we design a 2D-to-3D projection training scheme that leverages differentiable supervision from ambient occlusion, direct, and indirect lighting maps, which alleviates the computational cost of explicit ray tracing. Extensive experiments demonstrate that GRGS achieves superior visual quality, geometric consistency, and generalization across characters and lighting conditions.
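To make the physics-based shading idea concrete, the sketch below combines per-pixel direct lighting, indirect lighting, and ambient occlusion into a shaded image. The function name, array layout, and the exact combination rule are illustrative assumptions on our part; the paper's actual PGNR formulation integrates these cues with neural prediction and is not reproduced here.

```python
import numpy as np

def composite_shading(albedo, direct, indirect, ambient_occlusion):
    """Toy per-pixel shading in the spirit of physics-based relighting.

    Assumption (not the paper's exact model): direct light is attenuated
    by ambient occlusion (soft shadowing), indirect light is added on
    top, and albedo modulates the total. All inputs are (H, W, 3) arrays
    with values in [0, 1].
    """
    radiance = albedo * (direct * ambient_occlusion + indirect)
    return np.clip(radiance, 0.0, 1.0)

# Tiny synthetic example: uniform maps so the result is easy to check.
h, w = 4, 4
albedo = np.full((h, w, 3), 0.8)
direct = np.full((h, w, 3), 0.9)
indirect = np.full((h, w, 3), 0.2)
ao = np.full((h, w, 3), 0.5)      # half-occluded everywhere
img = composite_shading(albedo, direct, indirect, ao)
# Each channel: 0.8 * (0.9 * 0.5 + 0.2) = 0.52
```

Because every term is an explicit map, swapping in a different environment light or occlusion map immediately re-shades the image, which is what makes this style of decomposition editable.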
Problem

Research questions and friction points this paper is trying to address.

High-fidelity human novel view synthesis under diverse lighting conditions
Reconstruct lighting-invariant geometry with accurate depth and normals
Editable relighting with shadows and indirect illumination integration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lighting-aware Geometry Refinement module
Physically Grounded Neural Rendering
2D-to-3D projection training scheme
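The 2D-to-3D projection scheme above rests on lifting per-pixel 2D predictions (depth, normals, materials) into 3D, where they can supervise or initialize Gaussian attributes. A minimal sketch of that lifting step, assuming a standard pinhole camera model (the function and variable names are hypothetical, not GRGS code):

```python
import numpy as np

def unproject_depth(depth, K):
    """Lift a depth map to 3D points in camera space (pinhole model).

    depth : (H, W) array of per-pixel depths.
    K     : (3, 3) camera intrinsics matrix.
    Returns an (H, W, 3) array of camera-space points; per-pixel cues
    predicted in 2D can then be carried along to these 3D locations.
    """
    h, w = depth.shape
    vs, us = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Homogeneous pixel coordinates (u, v, 1) per pixel.
    pix = np.stack([us, vs, np.ones_like(us)], axis=-1).astype(np.float64)
    rays = pix @ np.linalg.inv(K).T   # back-projected camera-space rays
    return rays * depth[..., None]    # scale each ray by its depth

# Example: a camera with focal length 100 and principal point (2, 2),
# so the center pixel of a 4x4 depth map unprojects straight ahead.
K = np.array([[100.0, 0.0, 2.0],
              [0.0, 100.0, 2.0],
              [0.0, 0.0, 1.0]])
depth = np.full((4, 4), 2.0)
pts = unproject_depth(depth, K)
# pts[2, 2] is (0, 0, 2): the principal-point pixel at depth 2.
```

Because this lifting is differentiable, losses defined on the 2D maps (depth, normals, lighting maps) can propagate gradients to the 3D representation without any explicit ray tracing.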