ArchitectHead: Continuous Level of Detail Control for 3D Gaussian Head Avatars

📅 2025-10-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of supporting continuous Level-of-Detail (LOD) adaptation in 3D Gaussian Splatting (3DGS)-based avatars, this paper proposes the first differentiable, retraining-free continuous LOD control method specifically designed for 3D Gaussian avatars. Our approach implicitly encodes Gaussian parameters into multi-scale learnable feature maps defined over UV space and dynamically decodes them via a lightweight neural network. A UV-feature-field-driven adaptive sampling mechanism enables continuous, resolution-aware adjustment of Gaussian count. At the highest LOD, our method achieves state-of-the-art reconstruction quality; at the lowest LOD, it retains only 6.2% of the Gaussians, yielding nearly 2× rendering speedup, with only a 0.97 dB PSNR drop and a 24.1% LPIPS increase—demonstrating an unprecedented balance between rendering efficiency and visual fidelity.

📝 Abstract
3D Gaussian Splatting (3DGS) has enabled photorealistic and real-time rendering of 3D head avatars. Existing 3DGS-based avatars typically rely on tens of thousands of 3D Gaussian points (Gaussians), with the number of Gaussians fixed after training. However, many practical applications require adjustable levels of detail (LOD) to balance rendering efficiency and visual quality. In this work, we propose "ArchitectHead", the first framework for creating 3D Gaussian head avatars that support continuous control over LOD. Our key idea is to parameterize the Gaussians in a 2D UV feature space and propose a UV feature field composed of multi-level learnable feature maps to encode their latent features. A lightweight neural network-based decoder then transforms these latent features into 3D Gaussian attributes for rendering. ArchitectHead controls the number of Gaussians by dynamically resampling feature maps from the UV feature field at the desired resolutions. This method enables efficient and continuous control of LOD without retraining. Experimental results show that ArchitectHead achieves state-of-the-art (SOTA) quality in self- and cross-identity reenactment tasks at the highest LOD, while maintaining near-SOTA performance at lower LODs. At the lowest LOD, our method uses only 6.2% of the Gaussians while the quality degrades moderately (L1 Loss +7.9%, PSNR −0.97%, SSIM −0.6%, LPIPS Loss +24.1%), and the rendering speed nearly doubles.
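The abstract's core mechanism, resampling the UV feature field at an arbitrary resolution so that the Gaussian count tracks the map size, can be sketched as follows. This is a toy NumPy illustration: the level resolutions, channel count, and sum-over-levels fusion are assumptions for demonstration, not the paper's exact design.

```python
import numpy as np

def bilinear_resample(feature_map, out_h, out_w):
    """Bilinearly resample an (H, W, C) feature map to (out_h, out_w, C)."""
    h, w, _ = feature_map.shape
    # Sample positions in the source grid (align-corners convention).
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None, None]
    wx = (xs - x0)[None, :, None]
    top = feature_map[y0][:, x0] * (1 - wx) + feature_map[y0][:, x1] * wx
    bot = feature_map[y1][:, x0] * (1 - wx) + feature_map[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

# A toy "UV feature field": one learnable map per level (resolutions assumed).
rng = np.random.default_rng(0)
levels = [rng.standard_normal((r, r, 8)) for r in (64, 128, 256)]

def sample_lod(levels, res):
    """Resample every level to res x res and fuse (here: sum). Each texel of
    the result would be decoded into one Gaussian, so the count is res**2."""
    return sum(bilinear_resample(f, res, res) for f in levels)

feats = sample_lod(levels, 96)      # any target resolution, no retraining
assert feats.shape == (96, 96, 8)   # 96 * 96 = 9216 Gaussians at this LOD
```

Because the target resolution is a free parameter of the resampling step, the Gaussian count varies continuously between the lowest and highest LOD without touching the learned feature maps.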
Problem

Research questions and friction points this paper is trying to address.

Existing 3DGS head avatars fix the number of Gaussians once training ends
Adjusting the Gaussian count for a different LOD normally requires retraining
Applications must trade off rendering efficiency against visual quality across LODs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parameterizes Gaussians in 2D UV feature space
Uses multi-level feature maps for encoding
Dynamically resamples feature maps for LOD control
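The "lightweight neural network-based decoder" mentioned in the abstract can likewise be sketched as a small MLP that maps per-texel latent features to Gaussian attributes. The attribute layout (3 position + 3 scale + 4 rotation + 1 opacity + 3 color) and the activations below are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def decode_gaussians(latent, W1, b1, W2, b2):
    """Toy MLP decoder: (N, C) latent features -> per-Gaussian attributes.
    The 14-channel output split is assumed for illustration only."""
    h = np.maximum(latent @ W1 + b1, 0.0)                # ReLU hidden layer
    out = h @ W2 + b2                                    # (N, 14)
    pos, log_scale, quat, opa, rgb = np.split(out, [3, 6, 10, 11], axis=1)
    quat = quat / (np.linalg.norm(quat, axis=1, keepdims=True) + 1e-8)
    return {
        "position": pos,                                 # offsets in 3D
        "scale": np.exp(log_scale),                      # strictly positive
        "rotation": quat,                                # unit quaternion
        "opacity": 1.0 / (1.0 + np.exp(-opa)),           # sigmoid to (0, 1)
        "color": 1.0 / (1.0 + np.exp(-rgb)),             # sigmoid to (0, 1)
    }

rng = np.random.default_rng(1)
C, H = 8, 32                                  # latent / hidden widths (assumed)
W1, b1 = rng.standard_normal((C, H)) * 0.1, np.zeros(H)
W2, b2 = rng.standard_normal((H, 14)) * 0.1, np.zeros(14)
N = 96 * 96                                   # e.g. Gaussian count at a 96x96 UV map
g = decode_gaussians(rng.standard_normal((N, C)), W1, b1, W2, b2)
```

Because the decoder is applied per texel, the same weights serve every LOD: resampling the feature maps changes only N, never the network.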