🤖 AI Summary
This work proposes a feedforward animatable Gaussian avatar model that overcomes a key limitation of existing digital-human head modeling: the reliance on multi-view capture and time-consuming optimization, which makes it difficult to generate drivable avatars rapidly from arbitrary unposed images. By introducing UV-guided modeling and a learnable UV-token mechanism, the method establishes consistent correspondences between cross-view pixels and UV space. A joint attention mechanism operating in both the screen and UV domains supports flexible input modalities, including a single image, multiple unposed views, and casually captured smartphone videos. Trained with UV-space reprojection, a Gaussian-attribute decoder, and a large-scale synthetic identity dataset, the approach significantly outperforms state-of-the-art methods in both monocular and multi-view settings, delivering high-quality, fast, and generalizable facial animation.
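One way to read the "joint attention in screen and UV domains" idea is as cross-attention in which learnable UV tokens act as queries over screen-space image features. The sketch below is purely illustrative: the function names, single-head formulation, and residual update are assumptions for exposition, not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def uv_token_cross_attention(uv_tokens, screen_tokens, Wq, Wk, Wv):
    """Hypothetical sketch: UV tokens (queries) aggregate screen features (keys/values).

    uv_tokens:     (T, d) learnable tokens tied to UV-space locations
    screen_tokens: (N, d) per-pixel/patch features pooled over input views
    Wq, Wk, Wv:    (d, d) projection matrices
    """
    q = uv_tokens @ Wq
    k = screen_tokens @ Wk
    v = screen_tokens @ Wv
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))  # (T, N) attention weights
    return uv_tokens + attn @ v                     # residual update of UV tokens
```

In this reading, the updated UV tokens would then be passed to a decoder that predicts canonical Gaussian attributes; that decoder is not sketched here.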
📝 Abstract
We present UIKA, a feed-forward animatable Gaussian head model built from an arbitrary number of unposed inputs, including a single image, multi-view captures, and smartphone-captured videos. Unlike traditional avatar methods, which require a studio-level multi-view capture system and reconstruct a subject-specific model through a lengthy optimization process, we rethink the task through the lenses of model representation, network design, and data preparation. First, we introduce a UV-guided avatar modeling strategy in which each input image is associated with a pixel-wise facial correspondence estimate. This correspondence allows us to reproject each valid pixel color from screen space to UV space, which is independent of camera pose and character expression. Furthermore, we design learnable UV tokens on which attention can be applied at both the screen and UV levels. The learned UV tokens are decoded into canonical Gaussian attributes using UV information aggregated from all input views. To train our large avatar model, we additionally prepare a large-scale, identity-rich synthetic training dataset. Our method significantly outperforms existing approaches in both monocular and multi-view settings. See more details on our project page: https://zijian-wu.github.io/uika-page/
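The screen-to-UV reprojection step described above can be sketched as a scatter of valid pixel colors into a pose- and expression-independent UV map. This is a minimal illustration assuming a predicted per-pixel UV coordinate map and a validity mask; the function name, resolution, and the simple averaging of colliding pixels are assumptions, not the paper's implementation.

```python
import numpy as np

def reproject_to_uv(image, uv_coords, valid_mask, uv_size=256):
    """Hypothetical sketch: scatter valid screen-space colors into a UV texture.

    image:      (H, W, 3) float pixel colors
    uv_coords:  (H, W, 2) predicted per-pixel UV coordinates in [0, 1]
    valid_mask: (H, W) bool, True where a facial correspondence exists
    """
    uv_map = np.zeros((uv_size, uv_size, 3), dtype=np.float32)
    count = np.zeros((uv_size, uv_size, 1), dtype=np.float32)

    colors = image[valid_mask]                  # (N, 3) valid colors
    uv = uv_coords[valid_mask]                  # (N, 2) their UV targets
    ij = np.clip((uv * (uv_size - 1)).round().astype(int), 0, uv_size - 1)

    # Accumulate and average: multiple screen pixels may land on one texel
    np.add.at(uv_map, (ij[:, 1], ij[:, 0]), colors)
    np.add.at(count, (ij[:, 1], ij[:, 0]), 1.0)
    return uv_map / np.maximum(count, 1.0)
```

Because the UV parameterization is shared across views, maps produced this way from different cameras (or video frames) can be aggregated in a common space before decoding Gaussian attributes.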