FantasyPortrait: Enhancing Multi-Character Portrait Animation with Expression-Augmented Diffusion Transformers

📅 2025-07-17
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses key challenges in static-image-driven facial animationโ€”cross-subject expression distortion, inadequate modeling of subtle emotions, and difficulty in multi-character coordinated control. We propose an implicit dynamic modeling framework based on a diffusion Transformer. Our method employs expression-augmented learning and masked cross-attention to achieve identity-agnostic, fine-grained emotional rendering and independently controllable multi-character animation generation. Crucially, it eschews explicit geometric priors (e.g., 3DMMs or landmarks), thereby eliminating cross-driving artifacts and feature interference. To support training and evaluation, we introduce Multi-Expr, a novel multi-character expression dataset, and ExprBench, a dedicated benchmark for systematic assessment. Quantitative and qualitative experiments demonstrate that our approach significantly outperforms state-of-the-art methods across all metrics, especially in cross-subject reenactment and multi-character collaborative animation tasks.

๐Ÿ“ Abstract
Producing expressive facial animations from static images is a challenging task. Prior methods relying on explicit geometric priors (e.g., facial landmarks or 3DMM) often suffer from artifacts in cross reenactment and struggle to capture subtle emotions. Furthermore, existing approaches lack support for multi-character animation, as driving features from different individuals frequently interfere with one another, complicating the task. To address these challenges, we propose FantasyPortrait, a diffusion-transformer-based framework capable of generating high-fidelity, emotion-rich animations for both single- and multi-character scenarios. Our method introduces an expression-augmented learning strategy that uses implicit representations to capture identity-agnostic facial dynamics, enhancing the model's ability to render fine-grained emotions. For multi-character control, we design a masked cross-attention mechanism that ensures independent yet coordinated expression generation, effectively preventing feature interference. To advance research in this area, we propose the Multi-Expr dataset and ExprBench, a dataset and benchmark specifically designed for training and evaluating multi-character portrait animation. Extensive experiments demonstrate that FantasyPortrait significantly outperforms state-of-the-art methods in both quantitative metrics and qualitative evaluations, excelling particularly in challenging cross-reenactment and multi-character settings. Our project page is https://fantasy-amap.github.io/fantasy-portrait/.
Problem

Research questions and friction points this paper is trying to address.

Generating expressive facial animations from static images
Overcoming artifacts in cross reenactment and subtle emotion capture
Enabling multi-character animation without feature interference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Expression-augmented diffusion transformers for animation
Masked cross-attention for multi-character control
Implicit representations for fine-grained emotions
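The masked cross-attention idea listed above can be sketched as follows. This is a minimal illustration of the general mechanism, not the paper's implementation: the token counts, the region-mask layout, and the `softmax` helper are all assumptions. The key point is that a boolean mask restricts each character's latent region to attend only to that character's driving-expression tokens, so features from different individuals cannot interfere.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def masked_cross_attention(queries, keys, values, mask):
    """Cross-attention where each query token may only attend to the
    key/value tokens permitted by `mask` (True = allowed)."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)   # (Nq, Nk) similarity
    scores = np.where(mask, scores, -1e9)    # block forbidden pairs
    return softmax(scores, axis=-1) @ values

# Toy setup (hypothetical sizes): 4 video-latent tokens, the first 2
# belonging to character A's face region and the last 2 to character B's.
# Each character contributes 2 driving-expression tokens.
rng = np.random.default_rng(0)
d = 8
latent = rng.standard_normal((4, d))   # queries from the video latents
expr = rng.standard_normal((4, d))     # keys/values: A = rows 0-1, B = rows 2-3

mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True   # A's region attends only to A's expression tokens
mask[2:, 2:] = True   # B's region attends only to B's expression tokens

out = masked_cross_attention(latent, expr, expr, mask)
```

Because of the mask, perturbing character B's expression tokens leaves character A's output rows unchanged, which is exactly the independence the Innovation bullet describes.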
Qiang Wang
AMAP, Alibaba Group
Mengchao Wang
AMAP, Alibaba Group
Fan Jiang
AMAP, Alibaba Group
Yaqi Fan
Beijing University of Posts and Telecommunications
Yonggang Qi
Associate Professor, Beijing University of Posts and Telecommunications
computer vision; sketch-based vision learning algorithms and applications
Mu Xu
AMAP, Alibaba Group