Concat-ID: Towards Universal Identity-Preserving Video Synthesis

📅 2025-03-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses identity-preserving general video synthesis, targeting coherent identity continuity, facial editability, and spatiotemporal naturalness across single- and multi-identity as well as multi-subject scenarios (e.g., virtual try-on and background-controllable generation). The authors propose a novel cross-video pairing strategy and a multi-stage training paradigm. The architecture relies solely on 3D self-attention, omitting CNNs, auxiliary conditioning modules, and explicit 3D reconstruction components. Image features are extracted with a variational autoencoder and jointly represented with the video latents through concatenation along the sequence dimension. On both single- and multi-identity video generation benchmarks, the method consistently surpasses state-of-the-art approaches, and it is the first to scale seamlessly to multi-subject settings, establishing a new benchmark for identity-preserving video synthesis.

📝 Abstract
We present Concat-ID, a unified framework for identity-preserving video generation. Concat-ID employs Variational Autoencoders to extract image features, which are concatenated with video latents along the sequence dimension, leveraging solely 3D self-attention mechanisms without the need for additional modules. A novel cross-video pairing strategy and a multi-stage training regimen are introduced to balance identity consistency and facial editability while enhancing video naturalness. Extensive experiments demonstrate Concat-ID's superiority over existing methods in both single and multi-identity generation, as well as its seamless scalability to multi-subject scenarios, including virtual try-on and background-controllable generation. Concat-ID establishes a new benchmark for identity-preserving video synthesis, providing a versatile and scalable solution for a wide range of applications.
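The core mechanism described in the abstract, concatenating VAE-encoded reference-image features with video latents along the sequence dimension and running plain self-attention over the joint sequence, can be illustrated with a minimal sketch. This is an invented toy module, not the paper's implementation: the dimensions, module names, and the choice to discard the image tokens after attention are all assumptions for illustration.

```python
import torch
import torch.nn as nn

class ConcatSelfAttentionBlock(nn.Module):
    """Toy sketch (hypothetical): reference-image tokens are concatenated
    with video tokens along the sequence dimension, then a single
    self-attention layer mixes identity information into the video tokens.
    No extra conditioning module is used, mirroring the paper's claim."""

    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, video_tokens: torch.Tensor, image_tokens: torch.Tensor) -> torch.Tensor:
        # video_tokens: (B, T_v, dim) flattened spatiotemporal video latents
        # image_tokens: (B, T_i, dim) VAE features of the reference image(s)
        x = torch.cat([image_tokens, video_tokens], dim=1)  # join along sequence
        h = self.norm(x)
        out, _ = self.attn(h, h, h)  # full self-attention over the joint sequence
        x = x + out
        # return only the video portion; the image tokens acted as context
        return x[:, image_tokens.shape[1]:, :]

# Usage with toy shapes
block = ConcatSelfAttentionBlock()
video = torch.randn(2, 16, 64)  # e.g. 16 flattened video tokens
image = torch.randn(2, 4, 64)   # e.g. 4 reference-image tokens
out = block(video, image)
print(out.shape)  # torch.Size([2, 16, 64])
```

The point of the sketch is that identity conditioning arrives purely through the attention pattern over the concatenated sequence, so no cross-attention branch or adapter module is needed.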
Problem

Research questions and friction points this paper is trying to address.

Develops a framework for identity-preserving video generation.
Balances identity consistency and facial editability in videos.
Enhances video naturalness and scalability for multi-subject scenarios.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Variational Autoencoders for feature extraction.
Employs 3D self-attention without extra modules.
Introduces cross-video pairing for identity consistency.
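The cross-video pairing idea can be sketched as a data-construction step: for each training clip, draw the reference image from a different clip of the same identity, so the model cannot simply copy the reference frame and must balance identity consistency with facial editability. The data layout below (identity/clip tuples and the helper name) is an invented illustration, not the paper's pipeline.

```python
import random
from collections import defaultdict

def build_cross_video_pairs(clips, seed=0):
    """clips: list of (identity_id, clip_id) tuples.
    Returns (clip, reference_clip) pairs with matching identity but
    different clip ids; identities with a single clip are skipped.
    Hypothetical sketch of a cross-video pairing strategy."""
    rng = random.Random(seed)
    by_identity = defaultdict(list)
    for identity, clip in clips:
        by_identity[identity].append(clip)

    pairs = []
    for identity, clip_ids in by_identity.items():
        if len(clip_ids) < 2:
            continue  # no cross-video partner available for this identity
        for clip in clip_ids:
            # pick a reference from a *different* clip of the same identity
            ref = rng.choice([c for c in clip_ids if c != clip])
            pairs.append(((identity, clip), (identity, ref)))
    return pairs

pairs = build_cross_video_pairs([("a", 1), ("a", 2), ("b", 7)])
print(pairs)
```

Each emitted pair keeps the identity fixed while changing the source video, which is what forces the model to learn identity rather than appearance copying.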