Retrieval-Augmented Gaussian Avatars: Improving Expression Generalization

📅 2026-03-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited generalization of existing template-free animatable face avatars to out-of-distribution expressions, which stems from their reliance on the sparse expression data of a single identity. To overcome this, we propose Retrieval-Augmented Faces (RAF), a method that constructs a large-scale unlabeled expression bank during training and enhances the deformation field by replacing selected expression features with retrieved nearest-neighbor expression features, while still reconstructing the subject's original frames. RAF improves the disentanglement of identity and expression without requiring cross-identity paired data, additional annotations, or architectural modifications. Experiments on the NeRSemble benchmark demonstrate that RAF significantly enhances expression fidelity in both self-driven and cross-driven tasks, and user studies confirm that its outputs exhibit greater realism in both expression and pose.

📝 Abstract
Template-free animatable head avatars can achieve high visual fidelity by learning expression-dependent facial deformation directly from a subject's capture, avoiding parametric face templates and hand-designed blendshape spaces. However, since learned deformation is supervised only by the expressions observed for a single identity, these models suffer from limited expression coverage and often struggle when driven by motions that deviate from the training distribution. We introduce RAF (Retrieval-Augmented Faces), a simple training-time augmentation designed for template-free head avatars that learn deformation from data. RAF constructs a large unlabeled expression bank and, during training, replaces a subset of the subject's expression features with nearest-neighbor expressions retrieved from this bank while still reconstructing the subject's original frames. This exposes the deformation field to a broader range of expression conditions, encouraging stronger identity-expression decoupling and improving robustness to expression distribution shift without requiring paired cross-identity data, additional annotations, or architectural changes. We further analyze how retrieval augmentation increases expression diversity and validate retrieval quality with a user study showing that retrieved neighbors are perceptually closer in expression and pose. Experiments on the NeRSemble benchmark demonstrate that RAF consistently improves expression fidelity over the baseline, in both self-driving and cross-driving scenarios.
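The core training-time augmentation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the cosine-similarity retrieval metric, and the replacement probability are all assumptions for the sake of the example. The key property it demonstrates is that expression features are swapped for nearest neighbors from an unlabeled bank while the reconstruction target (the subject's original frames) stays unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

def retrieve_nearest(query, bank):
    """Return the bank entry most similar to `query` (cosine similarity,
    an assumed choice of retrieval metric)."""
    q = query / np.linalg.norm(query)
    b = bank / np.linalg.norm(bank, axis=1, keepdims=True)
    return bank[np.argmax(b @ q)]

def augment_expressions(expr_feats, bank, replace_prob=0.5):
    """Replace a random subset of per-frame expression features with their
    nearest neighbors retrieved from the unlabeled expression bank.

    `replace_prob` is a hypothetical hyperparameter; the deformation field
    would be conditioned on the returned features while the loss still
    reconstructs the subject's original frames."""
    out = expr_feats.copy()
    for i in range(len(out)):
        if rng.random() < replace_prob:
            out[i] = retrieve_nearest(out[i], bank)
    return out

# Toy example: 4 training frames with 8-dim expression features,
# and a bank of 100 expressions gathered during training.
bank = rng.normal(size=(100, 8))
feats = rng.normal(size=(4, 8))
augmented = augment_expressions(feats, bank)
```

In a real pipeline the bank would hold expression codes accumulated across many frames (and potentially many identities), and the augmented features would condition the deformation network, exposing it to expression conditions outside the single subject's capture.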
Problem

Research questions and friction points this paper is trying to address.

expression generalization
animatable avatars
facial deformation
distribution shift
template-free
Innovation

Methods, ideas, or system contributions that make the work stand out.

Retrieval-Augmentation
Template-free Avatars
Expression Generalization
Identity-Expression Decoupling
Neural Head Avatars