🤖 AI Summary
Existing video synthesis methods have made notable progress in single-shot generation, yet coherent multi-shot generation with cinematic transition control remains challenging due to unstable shot boundaries and temporal inconsistency. To address these issues, we introduce Cine250K, a fine-grained dataset designed for multi-shot video synthesis, and uncover through analysis of video diffusion models a strong correspondence between attention maps and shot boundaries. Leveraging this insight, we propose a mask-based control mechanism that enables precise, user-controllable transitions at arbitrary temporal positions and cinematic style transfer, and that works even in a training-free setting. Our approach integrates attention-driven boundary analysis, mask-conditioned diffusion modeling, and fine-tuning on the new dataset, achieving state-of-the-art transition accuracy, temporal consistency, and visual quality. We further design dedicated evaluation metrics for these criteria and demonstrate consistent improvements over prior methods.
📝 Abstract
Despite significant advances in video synthesis, research into multi-shot video generation remains in its infancy. Even with scaled-up models and massive datasets, shot transition capabilities remain rudimentary and unstable, largely confining generated videos to single-shot sequences. In this work, we introduce CineTrans, a novel framework for generating coherent multi-shot videos with cinematic, film-style transitions. To facilitate insight into film editing style, we construct a multi-shot video-text dataset, Cine250K, with detailed shot annotations. Furthermore, our analysis of existing video diffusion models uncovers a correspondence between attention maps in the diffusion model and shot boundaries, which we leverage to design a mask-based control mechanism that enables transitions at arbitrary positions and transfers effectively in a training-free setting. After fine-tuning on our dataset with the mask mechanism, CineTrans produces cinematic multi-shot sequences that adhere to film editing style, avoiding unstable transitions and naive concatenations. Finally, we propose specialized evaluation metrics for transition control, temporal consistency, and overall quality, and demonstrate through extensive experiments that CineTrans significantly outperforms existing baselines across all criteria.
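The abstract does not specify how the mask-based control mechanism is constructed. As a rough illustration only, one plausible reading is a block-diagonal temporal attention mask that blocks attention between frames on opposite sides of a user-chosen boundary, encouraging the model to place a cut there. The sketch below is a hypothetical toy version (the function name `make_shot_mask` and the exact masking scheme are assumptions, not the paper's implementation):

```python
# Hypothetical sketch: a temporal attention mask that enforces a shot
# boundary at a user-chosen frame index. Not the paper's actual code.

def make_shot_mask(num_frames: int, boundary: int) -> list[list[bool]]:
    """Boolean attention mask: entry [i][j] is True iff frame i may
    attend to frame j. Frames on opposite sides of `boundary` are
    blocked from attending to each other, so the two segments are
    decoupled in temporal attention."""
    def shot(i: int) -> int:
        # Shot 0 before the boundary, shot 1 from the boundary onward.
        return 0 if i < boundary else 1
    return [[shot(i) == shot(j) for j in range(num_frames)]
            for i in range(num_frames)]

# Frames 0-2 form shot A, frames 3-5 form shot B.
mask = make_shot_mask(num_frames=6, boundary=3)
assert mask[0][2] is True      # within shot A: attention allowed
assert mask[0][3] is False     # across the boundary: attention blocked
```

In practice such a mask would be applied to the attention logits of the video diffusion transformer (e.g. added as a large negative bias where the mask is False), and the boundary index would come from the user-specified transition position.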