🤖 AI Summary
Reconstructing high-fidelity, animatable human avatars from monocular video faces two fundamental challenges: under-constrained geometry and insufficient representational capacity. To address these, we propose Mesh-Guided 2D Gaussian Splatting, a novel representation that binds 2D Gaussian splats to the surface of a deformable template mesh, enabling geometry-consistent deformation via mesh guidance. We further introduce a selective gradient isolation mechanism to distill priors from multimodal foundation models (e.g., Sapiens) without conflicting optimization objectives, enhancing semantic coherence and fine-grained detail recovery. Our method integrates differentiable rendering, mesh-constrained deformation, and multi-objective loss optimization. Experiments demonstrate significant improvements over state-of-the-art methods in both geometric accuracy and appearance fidelity: the approach achieves spatiotemporally consistent, high-quality novel-view and novel-pose synthesis while supporting fine-grained semantic understanding, enabling robust, animatable avatar reconstruction from monocular video alone.
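To make the mesh binding concrete, below is a minimal PyTorch sketch of how 2D splats can follow a deforming template mesh via fixed barycentric coordinates and per-face tangent frames. The tensor layout, function name, and frame construction are illustrative assumptions, not the paper's released implementation.

```python
# Minimal sketch: each 2D Gaussian is parameterized by fixed barycentric
# coordinates on a template mesh face, so its position and orientation
# follow the mesh as it deforms (e.g., under SMPL-style skinning).
import torch
import torch.nn.functional as F

def splat_frames(verts, faces, face_ids, bary):
    """Place one splat per (face_id, bary) pair on the deformed mesh.

    verts:    (V, 3) deformed mesh vertices
    faces:    (F, 3) triangle vertex indices
    face_ids: (N,)   face each Gaussian is bound to
    bary:     (N, 3) barycentric coordinates (rows sum to 1)

    Returns splat centers (N, 3) and tangent frames (N, 3, 3), so each
    2D Gaussian stays aligned with its supporting triangle.
    """
    tri = verts[faces[face_ids]]                   # (N, 3, 3) triangle corners
    centers = (bary.unsqueeze(-1) * tri).sum(1)    # barycentric interpolation

    # Orthonormal frame per triangle: one tangent along an edge, the
    # normal from the cross product, and the bitangent to complete it.
    e1 = F.normalize(tri[:, 1] - tri[:, 0], dim=-1)
    n = F.normalize(torch.linalg.cross(tri[:, 1] - tri[:, 0],
                                       tri[:, 2] - tri[:, 0]), dim=-1)
    e2 = torch.linalg.cross(n, e1)
    frames = torch.stack([e1, e2, n], dim=-1)      # (N, 3, 3)
    return centers, frames
```

Because `face_ids` and `bary` stay fixed while `verts` changes with pose, the splats inherit the mesh's deformation directly, which is the geometry-consistent behavior the summary refers to.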
📝 Abstract
Reconstructing high-fidelity animatable human avatars from monocular videos remains challenging due to insufficient geometric information in single-view observations. While recent 3D Gaussian Splatting methods have shown promise, they struggle with surface detail preservation due to the free-form nature of 3D Gaussian primitives. To address both the representation limitations and the information scarcity, we propose a novel method, FMGS-Avatar, that integrates two key innovations. First, we introduce Mesh-Guided 2D Gaussian Splatting, where 2D Gaussian primitives are attached directly to template mesh faces with constrained position, rotation, and movement, enabling superior surface alignment and geometric detail preservation. Second, we leverage foundation models trained on large-scale datasets, such as Sapiens, to complement the limited visual cues from monocular videos. However, when distilling multimodal prior knowledge from foundation models, conflicting optimization objectives can emerge, as different modalities exhibit distinct parameter sensitivities. We address this with a coordinated training strategy built on selective gradient isolation, enabling each loss component to optimize its relevant parameters without interference. Through this combination of enhanced representation and coordinated information distillation, our approach significantly advances monocular 3D human avatar reconstruction. Experimental evaluation demonstrates superior reconstruction quality compared to existing methods, with notable gains in geometric accuracy and appearance fidelity, while also providing rich semantic information. Additionally, the distilled prior knowledge in a shared canonical space naturally enables spatially and temporally consistent rendering under novel views and poses.
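The selective gradient isolation described above can be pictured as routing each loss through a render in which the parameter groups it should not influence are detached. The sketch below assumes a hypothetical `render_fn` and a three-way split into geometry, appearance, and semantic parameters; neither the grouping nor the loss names come from the paper.

```python
# Hedged sketch of selective gradient isolation: each prior's loss sees a
# render where the other parameter groups are detached, so its gradients
# only reach the parameters it is meant to supervise.
import torch
import torch.nn.functional as F

def detached(params):
    """Return a view of a parameter dict with gradients blocked."""
    return {k: v.detach() for k, v in params.items()}

def isolated_loss(render_fn, geom, appearance, semantics, targets, w):
    # Photometric term: drives appearance (and geometry) as usual.
    rgb = render_fn(geom, appearance, semantics)["rgb"]
    l_rgb = (rgb - targets["rgb"]).abs().mean()

    # Normal prior (e.g., from Sapiens): appearance/semantics detached, so
    # the geometric cue only moves splat placement and orientation.
    # Assumes unit-length normal maps, hence the cosine-style loss.
    normal = render_fn(geom, detached(appearance), detached(semantics))["normal"]
    l_normal = (1.0 - (normal * targets["normal"]).sum(dim=-1)).mean()

    # Semantic prior: geometry detached, so part labels cannot drag the
    # surface. Assumes logits of shape (1, C, H, W) and labels (1, H, W).
    seg = render_fn(detached(geom), appearance, semantics)["seg"]
    l_seg = F.cross_entropy(seg, targets["seg"])

    return w["rgb"] * l_rgb + w["normal"] * l_normal + w["seg"] * l_seg
```

All losses share the same forward renderer; the `detach` calls only cut the backward paths, which is one way to realize "each loss component optimizes its relevant parameters without interference."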