FMGS-Avatar: Mesh-Guided 2D Gaussian Splatting with Foundation Model Priors for 3D Monocular Avatar Reconstruction

📅 2025-09-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Monocular video-based reconstruction of high-fidelity, animatable human avatars faces two fundamental challenges: geometric under-constrainedness and insufficient representational capacity. To address these, we propose Mesh-Guided 2D Gaussian Splatting—a novel representation that binds 2D Gaussian splats to a deformable template mesh surface, enabling geometry-consistent deformation via mesh guidance. We further introduce a selective gradient isolation mechanism to jointly distill priors from multimodal foundation models (e.g., Sapiens), enhancing semantic coherence and fine-grained detail recovery. Our method integrates differentiable rendering, mesh-constrained deformation, and multi-objective loss optimization. Experiments demonstrate significant improvements over state-of-the-art methods in both geometric accuracy and appearance fidelity. The approach achieves spatiotemporally consistent, high-quality novel-view and novel-pose synthesis, while supporting fine-grained semantic understanding—enabling robust, animatable avatar reconstruction from monocular video alone.

📝 Abstract
Reconstructing high-fidelity animatable human avatars from monocular videos remains challenging due to insufficient geometric information in single-view observations. While recent 3D Gaussian Splatting methods have shown promise, they struggle with surface detail preservation due to the free-form nature of 3D Gaussian primitives. To address both the representation limitations and information scarcity, we propose a novel method, FMGS-Avatar, that integrates two key innovations. First, we introduce Mesh-Guided 2D Gaussian Splatting, where 2D Gaussian primitives are attached directly to template mesh faces with constrained position, rotation, and movement, enabling superior surface alignment and geometric detail preservation. Second, we leverage foundation models trained on large-scale datasets, such as Sapiens, to complement the limited visual cues from monocular videos. However, when distilling multi-modal prior knowledge from foundation models, conflicting optimization objectives can emerge as different modalities exhibit distinct parameter sensitivities. We address this through a coordinated training strategy with selective gradient isolation, enabling each loss component to optimize its relevant parameters without interference. Through this combination of enhanced representation and coordinated information distillation, our approach significantly advances 3D monocular human avatar reconstruction. Experimental evaluation demonstrates superior reconstruction quality compared to existing methods, with notable gains in geometric accuracy and appearance fidelity while providing rich semantic information. Additionally, the distilled prior knowledge within a shared canonical space naturally enables spatially and temporally consistent rendering under novel views and poses.
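The core binding idea from the abstract (splats attached to template mesh faces so they deform with the surface) can be sketched as follows. This is a minimal illustrative sketch, assuming a barycentric parameterization of splat centers on triangle faces; all names, shapes, and the function itself are assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np

def splat_positions(vertices: np.ndarray,
                    faces: np.ndarray,
                    face_ids: np.ndarray,
                    barycentric: np.ndarray) -> np.ndarray:
    """Place each splat on its assigned mesh face (hypothetical sketch).

    vertices:    (V, 3) deformed mesh vertex positions
    faces:       (F, 3) vertex indices per triangle
    face_ids:    (N,)   face index each splat is bound to
    barycentric: (N, 3) barycentric weights (rows sum to 1)
    returns:     (N, 3) splat centers in world space
    """
    tri = vertices[faces[face_ids]]           # (N, 3, 3) triangle corners
    # Weighted combination of the three corners of each triangle.
    return np.einsum('nk,nkd->nd', barycentric, tri)

# Toy example: one triangle, one splat bound at its centroid.
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
faces = np.array([[0, 1, 2]])
bary = np.array([[1/3, 1/3, 1/3]])
center = splat_positions(verts, faces, np.array([0]), bary)

# Deform the mesh (e.g. a new pose); the splat follows the surface.
verts_deformed = verts + np.array([0., 0., 1.])
moved = splat_positions(verts_deformed, faces, np.array([0]), bary)
```

Because the splat is parameterized on the face rather than free-floating in space, mesh-driven deformation moves it consistently with the surface, which is the geometry-consistency property the paper attributes to mesh guidance.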
Problem

Research questions and friction points this paper is trying to address.

Reconstructing high-fidelity animatable avatars from monocular videos
Addressing surface detail preservation in 3D Gaussian representations
Resolving conflicting optimization objectives from multi-modal foundation models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mesh-guided 2D Gaussian splatting for surface alignment
Foundation model priors from large-scale datasets
Coordinated training with selective gradient isolation
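The selective gradient isolation idea listed above can be sketched in miniature: each loss term is permitted to update only a designated subset of parameters, so multi-modal distillation objectives do not interfere with one another. The parameter groups and loss names below are hypothetical assumptions for illustration, not the paper's actual assignment.

```python
# Per-loss gradients, represented as {parameter_name: gradient} dicts.
# (In a real system these would come from backpropagating each loss.)
grads_per_loss = {
    "photometric":    {"color": 0.5, "geometry": 0.2, "semantic": 0.1},
    "normal_prior":   {"color": 0.3, "geometry": 0.7, "semantic": 0.0},
    "semantic_prior": {"color": 0.4, "geometry": 0.6, "semantic": 0.9},
}

# Assumed grouping: which parameters each loss is allowed to touch.
allowed = {
    "photometric":    {"color"},
    "normal_prior":   {"geometry"},
    "semantic_prior": {"semantic"},
}

def isolate_and_sum(grads_per_loss, allowed):
    """Drop gradients outside each loss's allowed set, then accumulate."""
    total = {}
    for loss_name, grads in grads_per_loss.items():
        for param, grad in grads.items():
            if param in allowed[loss_name]:
                total[param] = total.get(param, 0.0) + grad
    return total

total_grads = isolate_and_sum(grads_per_loss, allowed)
```

After isolation, each parameter receives gradient only from its designated loss, which is one way to realize "each loss component optimizes its relevant parameters without interference" as described in the abstract.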
🔎 Similar Papers
2024-07-21 · IEEE Transactions on Pattern Analysis and Machine Intelligence · Citations: 7
Jinlong Fan
The University of Sydney
computer vision · image rectification · image processing · machine learning
Bingyu Hu
Hangzhou Dianzi University, Hangzhou, Zhejiang, China
Xingguang Li
Shenzhen Polytechnic University, Shenzhen, Guangdong, China
Yuxiang Yang
Hangzhou Dianzi University, Hangzhou, Zhejiang, China
Jing Zhang
Wuhan University, Wuhan, Hubei, China