Identity-Preserving Text-to-Video Generation by Frequency Decomposition

📅 2024-11-26
🏛️ arXiv.org
📈 Citations: 18
Influential: 5
📄 PDF
🤖 AI Summary
Poor identity consistency and reliance on instance-specific fine-tuning plague text-to-video (T2V) generation. To address this, we propose ConsID: a fine-tuning-free, frequency-aware controllable DiT architecture. Its core innovation lies in decoupling facial features into low-frequency components—capturing global structure via keypoint and reference-image encodings—and high-frequency components—encoding local details—and injecting them separately into shallow and deep DiT blocks for hierarchical frequency-guided identity control. This marks the first integration of frequency-domain decomposition into identity-preserving T2V modeling, overcoming the fine-tuning bottleneck inherent in prior methods. Evaluated on multiple benchmarks, ConsID achieves significant improvements in identity similarity (ID-Sim ↑18.7%) and video quality (FVD ↓32.5%), enabling high-fidelity, long-duration identity-stable controllable video generation.

Technology Category

Application Category

📝 Abstract
Identity-preserving text-to-video (IPT2V) generation aims to create high-fidelity videos with consistent human identity. It is an important task in video generation but remains an open problem for generative models. This paper pushes the technical frontier of IPT2V in two directions that have not been resolved in literature: (1) A tuning-free pipeline without tedious case-by-case finetuning, and (2) A frequency-aware heuristic identity-preserving DiT-based control scheme. We propose ConsisID, a tuning-free DiT-based controllable IPT2V model to keep human identity consistent in the generated video. Inspired by prior findings in frequency analysis of diffusion transformers, it employs identity-control signals in the frequency domain, where facial features can be decomposed into low-frequency global features and high-frequency intrinsic features. First, from a low-frequency perspective, we introduce a global facial extractor, which encodes reference images and facial key points into a latent space, generating features enriched with low-frequency information. These features are then integrated into shallow layers of the network to alleviate training challenges associated with DiT. Second, from a high-frequency perspective, we design a local facial extractor to capture high-frequency details and inject them into transformer blocks, enhancing the model's ability to preserve fine-grained features. We propose a hierarchical training strategy to leverage frequency information for identity preservation, transforming a vanilla pre-trained video generation model into an IPT2V model. Extensive experiments demonstrate that our frequency-aware heuristic scheme provides an optimal control solution for DiT-based models. Thanks to this scheme, our ConsisID generates high-quality, identity-preserving videos, making strides towards more effective IPT2V.
Problem

Research questions and friction points this paper is trying to address.

Achieves identity-consistent video generation without fine-tuning
Uses frequency decomposition to preserve facial features
Enhances DiT models with hierarchical frequency-aware controls
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tuning-free DiT-based controllable IPT2V model
Frequency-aware heuristic identity-preserving control scheme
Hierarchical training leveraging low and high-frequency features