FantasyID: Face Knowledge Enhanced ID-Preserving Video Generation

📅 2025-02-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of jointly preserving subject identity and ensuring facial motion realism in tuning-free identity-preserving text-to-video (IPT2V) generation. We propose a layer-aware adaptive feature injection method that synergistically integrates 3D geometric guidance with multi-view 2D facial enhancement. Specifically, we embed 3D facial geometry priors and multi-view facial appearance features into a diffusion transformer (DiT), enabling cross-layer adaptive injection of hierarchical facial representations while suppressing spurious dynamics. Crucially, our approach requires no subject-specific fine-tuning. To our knowledge, it is the first IPT2V framework unifying 3D structural guidance and multi-view 2D representation learning for identity-preserving video synthesis. Extensive experiments demonstrate significant improvements over prior IPT2V methods across multiple benchmarks, achieving state-of-the-art performance in both identity fidelity and facial motion naturalness.

📝 Abstract
Tuning-free approaches adapting large-scale pre-trained video diffusion models for identity-preserving text-to-video generation (IPT2V) have gained popularity recently due to their efficacy and scalability. However, significant challenges remain in achieving satisfactory facial dynamics while keeping the identity unchanged. In this work, we present a novel tuning-free IPT2V framework that enhances the face knowledge of a pre-trained video model built on diffusion transformers (DiT), dubbed FantasyID. Essentially, a 3D facial geometry prior is incorporated to ensure plausible facial structures during video synthesis. To prevent the model from learning copy-paste shortcuts that simply replicate the reference face across frames, a multi-view face augmentation strategy is devised to capture diverse 2D facial appearance features, hence increasing the dynamics of facial expressions and head poses. Additionally, after blending the 2D and 3D features as guidance, instead of naively employing cross-attention to inject guidance cues into DiT layers, a learnable layer-aware adaptive mechanism is employed to selectively inject the fused features into each individual DiT layer, facilitating balanced modeling of identity preservation and motion dynamics. Experimental results validate our model's superiority over current tuning-free IPT2V methods.
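The layer-aware adaptive injection described above can be sketched at a toy level: each DiT layer carries its own learnable gate that controls how strongly the fused 2D/3D face guidance is mixed into that layer's hidden state. The snippet below is a minimal illustration under stated assumptions, not the paper's implementation: `inject`, the simple additive blend (the paper injects via cross-attention), and all numeric values are hypothetical.

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def inject(hidden, guidance, gate_logit):
    # Blend the fused guidance feature into one layer's hidden state,
    # scaled by that layer's gate. An additive blend stands in for the
    # cross-attention injection used in the actual model (assumption).
    g = sigmoid(gate_logit)  # per-layer scalar gate in (0, 1)
    return [h + g * c for h, c in zip(hidden, guidance)]

# Toy setup: 3 DiT layers, each with its own (hypothetical) learned gate logit.
gate_logits = [-2.0, 0.0, 2.0]
hidden = [1.0, 1.0, 1.0]      # one layer's token features (toy 3-dim)
guidance = [0.5, 0.5, 0.5]    # fused 3D-geometry + multi-view 2D feature (toy)

for i, logit in enumerate(gate_logits):
    out = inject(hidden, guidance, logit)
    print(f"layer {i}: gate={sigmoid(logit):.2f}, out={[round(v, 3) for v in out]}")
```

Because the gates are learned per layer, the model can inject identity guidance strongly where it helps fidelity and weakly where it would suppress motion, which is the balance the abstract describes.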
Problem

Research questions and friction points this paper is trying to address.

Enhance identity-preserving video generation
Improve facial dynamics in video synthesis
Integrate 2D and 3D facial features
Innovation

Methods, ideas, or system contributions that make the work stand out.

3D facial geometry prior
multi-view face augmentation
layer-aware adaptive mechanism
Yunpeng Zhang
AMAP, Alibaba Group; Beijing University of Posts and Telecommunications
Qiang Wang
AMAP, Alibaba Group
Fan Jiang
AMAP, Alibaba Group
Yaqi Fan
Beijing University of Posts and Telecommunications
Mu Xu
AMAP, Alibaba Group
Yonggang Qi
Associate Professor, Beijing University of Posts and Telecommunications
computer vision; sketch-based vision learning algorithms and applications