🤖 AI Summary
This work addresses key challenges in static-image-driven facial animation: cross-subject expression distortion, inadequate modeling of subtle emotions, and difficulty in multi-character coordinated control. We propose an implicit dynamic modeling framework based on a diffusion Transformer. Our method employs expression-augmented learning and masked cross-attention to achieve identity-agnostic, fine-grained emotional rendering and independently controllable multi-character animation generation. Crucially, it eschews explicit geometric priors (e.g., 3DMMs or landmarks), thereby eliminating cross-driving artifacts and feature interference. To support training and evaluation, we introduce Multi-Expr, a novel multi-character expression dataset, and ExprBench, a dedicated benchmark for systematic assessment. Quantitative and qualitative experiments demonstrate that our approach significantly outperforms state-of-the-art methods across all metrics, especially in cross-subject reenactment and multi-character collaborative animation tasks.
📄 Abstract
Producing expressive facial animations from static images is a challenging task. Prior methods relying on explicit geometric priors (e.g., facial landmarks or 3DMMs) often suffer from artifacts in cross reenactment and struggle to capture subtle emotions. Furthermore, existing approaches lack support for multi-character animation, as driving features from different individuals frequently interfere with one another. To address these challenges, we propose FantasyPortrait, a diffusion-transformer-based framework capable of generating high-fidelity and emotion-rich animations for both single- and multi-character scenarios. Our method introduces an expression-augmented learning strategy that utilizes implicit representations to capture identity-agnostic facial dynamics, enhancing the model's ability to render fine-grained emotions. For multi-character control, we design a masked cross-attention mechanism that ensures independent yet coordinated expression generation, effectively preventing feature interference. To advance research in this area, we introduce the Multi-Expr dataset and ExprBench, a dataset and benchmark specifically designed for training and evaluating multi-character portrait animation. Extensive experiments demonstrate that FantasyPortrait significantly outperforms state-of-the-art methods in both quantitative metrics and qualitative evaluations, excelling particularly in challenging cross reenactment and multi-character contexts. Our project page is https://fantasy-amap.github.io/fantasy-portrait/.
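The masked cross-attention idea can be illustrated with a minimal sketch: each latent video token is only allowed to attend to the driving-expression features of the character whose spatial region it belongs to, so features from different individuals cannot interfere. This is a hypothetical PyTorch illustration of the general mechanism; all names, shapes, and the mask construction are assumptions, not the paper's actual implementation.

```python
import torch

def masked_cross_attention(q, k, v, region_mask):
    """Cross-attention restricted by a per-character region mask.

    q:           (B, Nq, D) latent video tokens
    k, v:        (B, Nk, D) concatenated driving-expression features of all characters
    region_mask: (B, Nq, Nk) boolean; True where a query token and a driving
                 feature belong to the same character, False elsewhere
    """
    d = q.shape[-1]
    attn = q @ k.transpose(-2, -1) / d**0.5          # (B, Nq, Nk) scaled scores
    attn = attn.masked_fill(~region_mask, float("-inf"))  # block cross-character attention
    attn = attn.softmax(dim=-1)
    attn = torch.nan_to_num(attn)                    # rows with an all-False mask become zeros
    return attn @ v                                  # (B, Nq, D)

# Toy usage: two characters, queries 0-1 belong to character A (keys 0-2),
# queries 2-3 belong to character B (keys 3-5).
torch.manual_seed(0)
B, Nq, Nk, D = 1, 4, 6, 8
q, k, v = torch.randn(B, Nq, D), torch.randn(B, Nk, D), torch.randn(B, Nk, D)
mask = torch.zeros(B, Nq, Nk, dtype=torch.bool)
mask[:, :2, :3] = True
mask[:, 2:, 3:] = True
out = masked_cross_attention(q, k, v, mask)
```

Because the softmax over an allowed subset of keys equals a softmax computed on those keys alone, character A's output tokens are mathematically independent of character B's driving features, which is the interference-prevention property the paper describes.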