🤖 AI Summary
Existing text-to-video (T2V) datasets consist primarily of isolated text-video pairs, limiting the modeling of temporal structure in coherent, multi-shot videos. Method: We introduce CI-VID, a large-scale dataset of over 340K samples that moves beyond isolated T2V toward text-and-video-to-video (TV2V) generation: each sample is a coherent sequence of video clips paired with captions describing both the individual content of each clip and the transitions between clips, enabling visually and textually grounded generation. We further propose a multidimensional evaluation benchmark integrating human assessment, vision-language model scoring, and similarity-based metrics. Contribution/Results: Models trained on CI-VID achieve significant improvements over baselines in generation accuracy and cross-clip content consistency, supporting story-driven content with smooth visual transitions and strong temporal coherence. This work advances T2V from static, single-pair modeling toward sequence-aware, text-and-video co-driven content generation.
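For concreteness, here is a minimal sketch of what one such sequence-level sample might look like in code. All class and field names are illustrative assumptions, not the dataset's actual schema; the one structural fact taken from the source is that a sample pairs per-clip captions with captions for the transitions between consecutive clips.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class ClipAnnotation:
    """One clip in a CI-VID-style sample (field names are hypothetical)."""
    video_path: str   # path to this clip's video file
    caption: str      # text describing this clip's individual content


@dataclass
class SequenceSample:
    """A multi-clip sample: N clips plus N-1 cross-clip transition captions."""
    clips: List[ClipAnnotation]
    transitions: List[str]  # transitions[i] describes the change from clips[i] to clips[i+1]

    def __post_init__(self) -> None:
        # A sequence of N clips has exactly N-1 transitions between them.
        assert len(self.transitions) == max(len(self.clips) - 1, 0)
```

Under this layout, a TV2V model conditioning on `clips[:i]` plus `transitions[i-1]` would generate `clips[i]`, which is what distinguishes the setup from isolated text-video pairs.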
📝 Abstract
Text-to-video (T2V) generation has recently attracted considerable attention, resulting in the development of numerous high-quality datasets that have propelled progress in this area. However, existing public datasets are primarily composed of isolated text-video (T-V) pairs and thus fail to support the modeling of coherent multi-clip video sequences. To address this limitation, we introduce CI-VID, a dataset that moves beyond isolated T2V generation toward text-and-video-to-video (TV2V) generation, enabling models to produce coherent, multi-scene video sequences. CI-VID contains over 340,000 samples, each featuring a coherent sequence of video clips with text captions that capture both the individual content of each clip and the transitions between them, enabling visually and textually grounded generation. To further validate the effectiveness of CI-VID, we design a comprehensive, multi-dimensional benchmark incorporating human evaluation, VLM-based assessment, and similarity-based metrics. Experimental results demonstrate that models trained on CI-VID exhibit significant improvements in both accuracy and content consistency when generating video sequences. This facilitates the creation of story-driven content with smooth visual transitions and strong temporal coherence, underscoring the quality and practical utility of the CI-VID dataset. We release the CI-VID dataset and the accompanying code for data construction and evaluation at: https://github.com/ymju-BAAI/CI-VID
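As an illustration of the similarity-based side of such a benchmark, below is a minimal sketch of two CLIP-based scores: one for cross-modal text-video alignment (a proxy for generation accuracy) and one for consistency between consecutive clips. The model choice, function names, and exact scoring scheme are assumptions for illustration and are not claimed to be the paper's actual metrics.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Illustrative backbone; the benchmark's actual scorer may differ.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")


def clip_text_alignment(frames: list[Image.Image], caption: str) -> float:
    """Mean cosine similarity between each frame embedding and the caption embedding."""
    inputs = processor(text=[caption], images=frames, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return (img @ txt.T).mean().item()


def cross_clip_consistency(frames_a: list[Image.Image], frames_b: list[Image.Image]) -> float:
    """Cosine similarity between the mean frame embeddings of two consecutive clips."""
    def embed(frames: list[Image.Image]) -> torch.Tensor:
        inputs = processor(images=frames, return_tensors="pt")
        with torch.no_grad():
            emb = model.get_image_features(**inputs).mean(dim=0)
        return emb / emb.norm()

    return float(embed(frames_a) @ embed(frames_b))
```

Scores of this kind would complement human evaluation and VLM-based assessment by giving a cheap, reproducible signal for both per-clip accuracy and sequence-level coherence.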