🤖 AI Summary
Current video diffusion models are limited to single-shot generation, failing to meet practical demands for identity consistency and fine-grained controllability in multi-shot portrait videos. To address this, we propose EchoShot, the first native multi-shot video diffusion framework. Our method extends video diffusion Transformers with shot-aware positional embeddings to enable cross-shot vision–language alignment, and introduces joint text–attribute conditioning for precise control over facial attributes, clothing, and motion. We further construct PortraitGala, a high-fidelity multi-shot portrait video dataset, and extend the framework to reference-image-driven personalization and arbitrarily long multi-shot video synthesis. Experiments demonstrate substantial improvements in cross-shot identity consistency and fine-grained controllability across semantic dimensions. This work establishes a new paradigm for multi-shot video modeling, advancing both architectural design and dataset curation for controllable, identity-preserving video generation.
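The summary mentions "joint text–attribute conditioning" without detailing where the attributes enter the model. As a hedged, prompt-level illustration only (the actual mechanism may operate on embeddings rather than text), the sketch below shows one plausible way per-shot attribute fields could be serialized into the caption that conditions each shot; the helper name `build_shot_caption` and the attribute keys are hypothetical.

```python
# Hedged sketch, not the paper's code: fold fine-grained attributes
# (facial attributes, outfit, motion) into a per-shot caption. The
# function name and attribute keys are assumptions for illustration.
def build_shot_caption(base: str, attrs: dict[str, str]) -> str:
    """Serialize attribute fields into one conditioning prompt."""
    detail = ", ".join(f"{k}: {v}" for k, v in attrs.items())
    return f"{base}. {detail}" if detail else base

# Example: two shots of the same subject with shared and varying attributes.
shots = [
    build_shot_caption("A woman walks through a market",
                       {"facial attributes": "short curly hair, glasses",
                        "outfit": "red coat",
                        "motion": "turns toward the camera"}),
    build_shot_caption("Close-up of the same woman smiling",
                       {"outfit": "red coat", "motion": "slow nod"}),
]
print(shots[0])
```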
📝 Abstract
Video diffusion models substantially boost the productivity of artistic workflows through their capacity for high-quality portrait video generation. However, prevailing pipelines are largely constrained to single-shot creation, while real-world applications call for multiple shots with identity consistency and flexible content controllability. In this work, we propose EchoShot, a native and scalable multi-shot framework for portrait customization built upon a foundation video diffusion model. To start, we propose shot-aware position embedding mechanisms within the video diffusion transformer architecture to model inter-shot variations and establish intricate correspondence between multi-shot visual content and its textual descriptions. This simple yet effective design enables direct training on multi-shot video data without introducing additional computational overhead. To facilitate model training in the multi-shot scenario, we construct PortraitGala, a large-scale, high-fidelity human-centric video dataset featuring cross-shot identity consistency and fine-grained captions describing facial attributes, outfits, and dynamic motions. To further enhance applicability, we extend EchoShot to reference-image-based personalized multi-shot generation and long video synthesis with an arbitrary number of shots. Extensive evaluations demonstrate that EchoShot achieves superior identity consistency as well as attribute-level controllability in multi-shot portrait video generation. Notably, the proposed framework shows potential as a foundational paradigm for general multi-shot video modeling.
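The abstract does not specify how the shot-aware position embedding is realized (e.g., additive versus rotary). The following is a minimal sketch of one plausible additive form, assuming the temporal index restarts at every shot boundary while a separate shot-index channel tells shots apart; the names `sinusoidal` and `shot_aware_positions` and the reset-per-shot scheme are assumptions, not the authors' implementation.

```python
# Minimal sketch of a shot-aware position embedding (assumed design):
# per-frame embedding = intra-shot time embedding + shot-index embedding,
# so attention can distinguish both a frame's place within its shot and
# which shot it belongs to.
import math
import torch

def sinusoidal(pos: torch.Tensor, dim: int) -> torch.Tensor:
    """Standard sinusoidal embedding of integer positions -> (N, dim)."""
    half = dim // 2
    freqs = torch.exp(-torch.arange(half).float() * (math.log(10000.0) / half))
    angles = pos.float()[:, None] * freqs[None, :]
    return torch.cat([angles.sin(), angles.cos()], dim=-1)

def shot_aware_positions(shot_lengths: list[int], dim: int) -> torch.Tensor:
    """Build positions for a multi-shot sequence of latent frames."""
    frame_pos, shot_idx = [], []
    for s, n in enumerate(shot_lengths):
        frame_pos.extend(range(n))  # temporal index resets inside each shot
        shot_idx.extend([s] * n)    # constant shot id across a shot's frames
    t = torch.tensor(frame_pos)
    sid = torch.tensor(shot_idx)
    return sinusoidal(t, dim) + sinusoidal(sid, dim)

# Example: a three-shot clip with 8, 12, and 6 latent frames.
pe = shot_aware_positions([8, 12, 6], dim=64)
print(pe.shape)  # torch.Size([26, 64])
```

Because the shot index is just another position channel, this scheme adds no extra parameters or attention cost, consistent with the abstract's claim that the design introduces no additional computational overhead.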