🤖 AI Summary
This work addresses a longstanding challenge in high-fidelity 3D digital human modeling: balancing photorealism with generalization. It proposes Large-Scale Codec Avatars (LCA), which introduces, for the first time, a large-model pretraining paradigm to this domain: the model first learns appearance and geometry priors from millions of in-the-wild videos, then refines detail fidelity and expressiveness through post-training on high-quality studio-captured data. The resulting model supports efficient feed-forward inference while preserving identity consistency, and offers fine-grained control over facial expressions, finger-level hand articulation, and loose clothing. Moreover, it exhibits emergent capabilities, including relightability, robust generalization across diverse ethnicities, hairstyles, and garments, and zero-shot resilience to stylized inputs.
📝 Abstract
High-quality 3D avatar modeling faces a critical trade-off between fidelity and generalization. On the one hand, multi-view studio data enables high-fidelity modeling of humans with precise control over expressions and poses, but it struggles to generalize to real-world data due to limited scale and the domain gap between the studio environment and the real world. On the other hand, recent large-scale avatar models trained on millions of in-the-wild samples show promise for generalization across a wide range of identities, yet the resulting avatars are often of low quality due to inherent 3D ambiguities. To address this, we present Large-Scale Codec Avatars (LCA), a high-fidelity, full-body 3D avatar model that generalizes to world-scale populations in a feed-forward manner, enabling efficient inference. Inspired by the success of large language models and vision foundation models, we present, for the first time, a pre/post-training paradigm for 3D avatar modeling at scale: we pretrain on 1M in-the-wild videos to learn broad priors over appearance and geometry, then post-train on high-quality curated data to enhance expressivity and fidelity. LCA generalizes across hairstyles, clothing, and demographics while providing precise, fine-grained facial expressions and finger-level articulation control, with strong identity preservation. Notably, despite the absence of direct supervision, we observe emergent relightability, support for loose garments on unconstrained inputs, and zero-shot robustness to stylized imagery.