CI-VID: A Coherent Interleaved Text-Video Dataset

📅 2025-07-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing text-to-video (T2V) datasets consist primarily of isolated text-video pairs, limiting the modeling of temporal structure in coherent, multi-shot videos. Method: We introduce CI-VID, a large-scale dataset of over 340K samples that moves beyond isolated T2V generation toward text-and-video-to-video (TV2V) generation; each sample is a coherent sequence of video clips annotated with per-clip captions and cross-clip transition descriptions, supporting visually and textually grounded generation. We further propose a multi-dimensional evaluation benchmark integrating human assessment, vision-language model scoring, and similarity-based metrics. Contribution/Results: Models trained on CI-VID achieve significant improvements over baselines in generation accuracy, transition smoothness, and narrative coherence, advancing T2V from isolated single-pair modeling toward sequence-aware, text-and-video co-driven content generation.

📝 Abstract
Text-to-video (T2V) generation has recently attracted considerable attention, resulting in the development of numerous high-quality datasets that have propelled progress in this area. However, existing public datasets are primarily composed of isolated text-video (T-V) pairs and thus fail to support the modeling of coherent multi-clip video sequences. To address this limitation, we introduce CI-VID, a dataset that moves beyond isolated text-to-video (T2V) generation toward text-and-video-to-video (TV2V) generation, enabling models to produce coherent, multi-scene video sequences. CI-VID contains over 340,000 samples, each featuring a coherent sequence of video clips with text captions that capture both the individual content of each clip and the transitions between them, enabling visually and textually grounded generation. To further validate the effectiveness of CI-VID, we design a comprehensive, multi-dimensional benchmark incorporating human evaluation, VLM-based assessment, and similarity-based metrics. Experimental results demonstrate that models trained on CI-VID exhibit significant improvements in both accuracy and content consistency when generating video sequences. This facilitates the creation of story-driven content with smooth visual transitions and strong temporal coherence, underscoring the quality and practical utility of the CI-VID dataset. We release the CI-VID dataset and the accompanying code for data construction and evaluation at: https://github.com/ymju-BAAI/CI-VID
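Per the abstract, each CI-VID sample pairs a sequence of video clips with captions for each clip and descriptions of the transitions between them. A minimal sketch of what such a record could look like is below; the field names (`sample_id`, `clips`, `transitions`) are illustrative assumptions, not the released dataset's actual schema.

```python
# Hypothetical CI-VID-style sample record (field names are assumptions,
# not taken from the released dataset).
sample = {
    "sample_id": "example_0001",
    "clips": [
        {"video": "clip_0.mp4", "caption": "A chef chops vegetables on a board."},
        {"video": "clip_1.mp4", "caption": "The chef stirs the vegetables in a pan."},
    ],
    # One transition description per adjacent pair of clips.
    "transitions": [
        "The camera cuts from the cutting board to the stove."
    ],
}

def validate_sample(s):
    """Check that transition annotations align with adjacent clip pairs:
    a sequence of N clips should carry exactly N-1 transitions."""
    n_clips = len(s["clips"])
    n_trans = len(s["transitions"])
    return n_clips >= 2 and n_trans == n_clips - 1

print(validate_sample(sample))  # prints True
```

The N-1 transitions for N clips reflect the paper's framing: captions describe each clip individually, while transitions ground how one clip leads into the next.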
Problem

Research questions and friction points this paper is trying to address.

Existing public datasets consist of isolated text-video pairs and lack coherent multi-clip video sequences
Models cannot condition on preceding clips and text to generate coherent multi-scene videos
Generated video sequences need improved accuracy and content consistency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces CI-VID for coherent multi-scene video generation
Provides over 340,000 samples with per-clip captions and cross-clip transition descriptions
Multi-dimensional benchmark combining human, VLM-based, and similarity-based evaluation
Yiming Ju
Beijing Academy of Artificial Intelligence
NLP, AI, LLM
Jijin Hu
Beijing University of Posts and Telecommunications
Zhengxiong Luo
Bytedance Seed
Super-Resolution, Human Pose Estimation, Multimodal Generation
Haoge Deng
Institute of Automation, Chinese Academy of Sciences & Beijing Academy of Artificial Intelligence
Computer Vision
Hanyu Zhao
Beijing Academy of Artificial Intelligence
Li Du
Beijing Academy of Artificial Intelligence
Chengwei Wu
Harbin Institute of Technology
Fuzzy Control, Adaptive Control, Networked Control Systems
Donglin Hao
Beijing Academy of Artificial Intelligence
Xinlong Wang
Beijing Academy of Artificial Intelligence
Tengfei Pan
Beijing Academy of Artificial Intelligence