🤖 AI Summary
This work addresses identity-preserving general video synthesis, targeting coherent identity continuity, facial editability, and spatiotemporal naturalness across single-identity, multi-identity, and multi-subject scenarios (e.g., virtual try-on and background-controllable generation). We propose a novel cross-video pairing strategy and a multi-stage training paradigm. The architecture relies solely on 3D self-attention, with no CNNs, auxiliary conditioning modules, or explicit 3D reconstruction components. Image features are extracted by a variational autoencoder and concatenated with the video latents along the sequence dimension to form a joint representation. On both single- and multi-identity video generation benchmarks, our method consistently surpasses state-of-the-art approaches, and it is the first to scale seamlessly to multi-subject settings, establishing a new benchmark for identity-preserving video synthesis.
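A minimal PyTorch sketch of the concatenation mechanism described above, assuming flattened spatiotemporal latents. The `ConcatIDBlock` class, the tensor shapes, and all hyperparameters are illustrative placeholders, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class ConcatIDBlock(nn.Module):
    """Illustrative block: VAE-encoded identity tokens are concatenated
    with video latent tokens along the sequence dimension, so a single
    self-attention pass lets every video token attend to the identity
    tokens without any extra conditioning module."""

    def __init__(self, dim: int = 512, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, video_tokens: torch.Tensor, id_tokens: torch.Tensor) -> torch.Tensor:
        # video_tokens: (B, T*H*W, D) flattened spatiotemporal latents
        # id_tokens:    (B, N_id, D)  VAE features of the reference image(s)
        x = torch.cat([video_tokens, id_tokens], dim=1)  # sequence-dim concat
        h = self.norm(x)
        out, _ = self.attn(h, h, h)                      # joint self-attention
        x = x + out
        # Keep only the video tokens for the next stage.
        return x[:, : video_tokens.shape[1]]

# Toy usage with hypothetical shapes.
block = ConcatIDBlock()
video = torch.randn(2, 16 * 8 * 8, 512)  # B=2, T=16, 8x8 latent grid
ident = torch.randn(2, 64, 512)          # 64 identity tokens per sample
out = block(video, ident)                # (2, 1024, 512)
```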
📝 Abstract
We present Concat-ID, a unified framework for identity-preserving video generation. Concat-ID employs Variational Autoencoders to extract image features, which are concatenated with video latents along the sequence dimension, leveraging solely 3D self-attention mechanisms without the need for additional modules. A novel cross-video pairing strategy and a multi-stage training regimen are introduced to balance identity consistency and facial editability while enhancing video naturalness. Extensive experiments demonstrate Concat-ID's superiority over existing methods in both single- and multi-identity generation, as well as its seamless scalability to multi-subject scenarios, including virtual try-on and background-controllable generation. Concat-ID establishes a new benchmark for identity-preserving video synthesis, providing a versatile and scalable solution for a wide range of applications.
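The abstract does not specify how cross-video pairs are formed. The sketch below assumes the strategy draws the reference image and the training target from different videos of the same identity, so the model cannot simply copy the reference pose or expression; the `clips` schema and helper name are hypothetical.

```python
import random
from collections import defaultdict

def build_cross_video_pairs(clips):
    """Hypothetical cross-video pairing: for each identity appearing in
    at least two clips, sample the reference frame from a *different*
    video than the training target, encouraging identity consistency
    while leaving facial attributes editable.

    `clips` is assumed to be a list of dicts:
        {"identity": str, "video_id": str, "ref_frame": ..., "latents": ...}
    """
    by_identity = defaultdict(list)
    for clip in clips:
        by_identity[clip["identity"]].append(clip)

    pairs = []
    for identity, group in by_identity.items():
        if len(group) < 2:
            continue  # need at least two videos of this identity
        for target in group:
            # Reference must come from another video of the same identity.
            candidates = [c for c in group if c["video_id"] != target["video_id"]]
            if not candidates:
                continue
            ref = random.choice(candidates)
            pairs.append((ref["ref_frame"], target["latents"]))
    return pairs
```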