EchoShot: Multi-Shot Portrait Video Generation

📅 2025-06-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current video diffusion models are limited to single-shot generation, failing to meet practical demands for identity consistency and fine-grained controllability in multi-shot portrait videos. To address this, we propose the first native multi-shot video diffusion framework. Our method extends video diffusion Transformers with shot-aware positional embeddings to enable cross-shot vision–language alignment, and introduces joint text–attribute conditioning for precise control over facial attributes, clothing, and motion. We further construct PortraitGala, a high-fidelity multi-shot portrait video dataset, and integrate reference-image-driven personalization with arbitrarily long multi-shot video synthesis. Experiments demonstrate substantial improvements in cross-shot identity consistency and fine-grained controllability across semantic dimensions. This work establishes a new paradigm for multi-shot video modeling, advancing both architectural design and dataset curation for controllable, identity-preserving video generation.

Technology Category

Application Category

📝 Abstract
Video diffusion models substantially boost the productivity of artistic workflows with high-quality portrait video generative capacity. However, prevailing pipelines are primarily constrained to single-shot creation, while real-world applications urge for multiple shots with identity consistency and flexible content controllability. In this work, we propose EchoShot, a native and scalable multi-shot framework for portrait customization built upon a foundation video diffusion model. To start with, we propose shot-aware position embedding mechanisms within video diffusion transformer architecture to model inter-shot variations and establish intricate correspondence between multi-shot visual content and their textual descriptions. This simple yet effective design enables direct training on multi-shot video data without introducing additional computational overhead. To facilitate model training within multi-shot scenario, we construct PortraitGala, a large-scale and high-fidelity human-centric video dataset featuring cross-shot identity consistency and fine-grained captions such as facial attributes, outfits, and dynamic motions. To further enhance applicability, we extend EchoShot to perform reference image-based personalized multi-shot generation and long video synthesis with infinite shot counts. Extensive evaluations demonstrate that EchoShot achieves superior identity consistency as well as attribute-level controllability in multi-shot portrait video generation. Notably, the proposed framework demonstrates potential as a foundational paradigm for general multi-shot video modeling.
Problem

Research questions and friction points this paper is trying to address.

Generates multi-shot portrait videos with identity consistency
Enhances content controllability in video diffusion models
Addresses limitations of single-shot video generation pipelines
Innovation

Methods, ideas, or system contributions that make the work stand out.

Shot-aware position embedding for multi-shot modeling
Large-scale dataset with identity consistency
Reference image-based personalized generation
🔎 Similar Papers
No similar papers found.
J
Jiahao Wang
Xi’an Jiaotong University
Hualian Sheng
Hualian Sheng
Zhejiang University
Computer Vision
Sijia Cai
Sijia Cai
Alibaba Cloud-Apsara Lab
Vision-Language Understanding & Generation
Weizhan Zhang
Weizhan Zhang
Professor,Department of Computer Science and Technology, Xi'an Jiaotong University
Multimedia networking
C
Caixia Yan
Xi’an Jiaotong University
Y
Yachuang Feng
Alibaba Cloud
B
Bing Deng
Alibaba Cloud
J
Jieping Ye
Alibaba Cloud