🤖 AI Summary
This work addresses key challenges in single-image-driven 3D virtual human generation, namely appearance distortion, multi-view inconsistency, and temporal discontinuity. We propose the first unified framework integrating regression-based 3D human reconstruction with video diffusion modeling. Methodologically, we leverage SMPL-X parameters as geometric priors and introduce dense geometric driving signals for geometry-conditioned modulation, while employing decoupled rendering to enhance appearance fidelity and cross-pose/cross-view generalization. Compared to prior approaches, our method achieves significant improvements in novel-view synthesis quality and non-rigid animation naturalness on both in-domain and out-of-domain real-world video data. To our knowledge, this is the first method enabling high-fidelity, view-consistent, and temporally coherent 3D virtual human generation from a single input image.
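The conditioning scheme described above, rendering dense geometric signals from the reconstructed SMPL-X human and injecting them into a video diffusion denoiser, can be sketched minimally as below. This is an illustrative assumption of the architecture, not the paper's actual implementation: module names, channel counts, and the simple channel-wise concatenation of geometry features are all hypothetical.

```python
import torch
import torch.nn as nn

class GeometryConditionedDenoiser(nn.Module):
    """Hypothetical sketch: a diffusion denoiser modulated by dense
    geometric renderings (e.g. normal/coordinate maps rasterized from
    the reconstructed SMPL-X mesh). Shapes and layers are illustrative
    assumptions, not the paper's architecture."""

    def __init__(self, latent_ch: int = 4, geom_ch: int = 6, hidden: int = 32):
        super().__init__()
        # Encode the dense driving signal (per-pixel geometry rendering).
        self.geom_encoder = nn.Conv2d(geom_ch, hidden, 3, padding=1)
        # Denoiser consumes the noisy latent concatenated with geometry features,
        # so every pixel is conditioned on dense geometry rather than a sparse skeleton.
        self.denoiser = nn.Sequential(
            nn.Conv2d(latent_ch + hidden, hidden, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(hidden, latent_ch, 3, padding=1),
        )

    def forward(self, noisy_latent: torch.Tensor, geom_render: torch.Tensor) -> torch.Tensor:
        cond = self.geom_encoder(geom_render)
        return self.denoiser(torch.cat([noisy_latent, cond], dim=1))

# Toy usage: per-frame conditioning for a short clip (T frames folded into batch).
T, H, W = 2, 16, 16
noisy = torch.randn(T, 4, H, W)   # noisy video latents
geom = torch.randn(T, 6, H, W)    # stand-in for dense normals + coordinates from SMPL-X
pred = GeometryConditionedDenoiser()(noisy, geom)
print(pred.shape)  # torch.Size([2, 4, 16, 16])
```

The point of the sketch is the dense, spatially aligned condition: unlike sparse keypoint or skeleton signals, every latent pixel receives geometry information, which is what lets the synthesis stay faithful to the reference structure across views and poses.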
📝 Abstract
We introduce a generalizable and unified framework for the challenging problem of single-image avatar generation: synthesizing view-consistent and temporally coherent avatars from a single image. While recent methods employ diffusion models conditioned on human templates such as depth or normal maps, they often struggle to preserve appearance information due to the discrepancy between sparse driving signals and the actual human subject, resulting in multi-view and temporal inconsistencies. Our approach bridges this gap by combining the reconstruction power of regression-based 3D human reconstruction with the generative capabilities of a diffusion model. The dense driving signal from the initial reconstructed human provides comprehensive conditioning, ensuring high-quality synthesis faithful to the reference appearance and structure. Additionally, we propose a unified framework that enables the generalization learned from novel pose synthesis on in-the-wild videos to transfer naturally to novel view synthesis. Our video-based diffusion model enhances disentangled synthesis with high-quality view-consistent renderings for novel views and realistic non-rigid deformations in novel pose animation. Results demonstrate the superior generalization ability of our method across in-domain and out-of-domain in-the-wild datasets. Project page: https://humansensinglab.github.io/GAS/