Koala-36M: A Large-scale Video Dataset Improving Consistency between Fine-grained Conditions and Video Content

📅 2024-10-10
🏛️ arXiv.org
📈 Citations: 16
Influential: 1
🤖 AI Summary
Existing video datasets suffer from coarse-grained temporal segmentation, unstructured text descriptions, and the absence of rigorous quality assessment—limiting fine-grained alignment in controllable video generation. To address these challenges, Koala-36M introduces a systematic solution: (1) constructing a large-scale dataset comprising 36 million high-quality video clips; (2) proposing the Video Training Suitability Score (VTSS), a novel multi-metric quality evaluation framework; (3) developing a high-accuracy transition detection method based on linear classification over probability distributions; and (4) generating structured, long-form textual descriptions averaging 200 words per clip, coupled with optimized fine-grained conditional embeddings. Experiments demonstrate substantial improvements in spatiotemporal text-video alignment accuracy and generation consistency. The codebase, dataset, and full preprocessing pipeline are publicly released.
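The transition-detection idea described above, a linear classifier applied to probability distributions, can be sketched as follows. This is a minimal illustration, not the paper's pipeline: the gray-level histogram features, the feature layout, and the hand-set weights are all assumptions made for the example.

```python
import numpy as np

def frame_histogram(frame, bins=8):
    """Normalized gray-level histogram: a per-frame probability distribution."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def transition_features(frames, bins=8):
    """For each adjacent frame pair, concatenate the two distributions and
    their absolute difference -- a simple input for a linear classifier."""
    hists = [frame_histogram(f, bins) for f in frames]
    feats = [np.concatenate([h0, h1, np.abs(h0 - h1)])
             for h0, h1 in zip(hists[:-1], hists[1:])]
    return np.stack(feats)

def predict_transitions(feats, w, b, thresh=0.5):
    """Logistic (linear) classifier over the distribution features:
    a pair is flagged as a shot transition when its score clears thresh."""
    scores = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
    return scores > thresh
```

A toy run with two uniform "shots" (three dark frames, then three bright frames) and a weight vector that only looks at the histogram-difference block flags exactly the pair that straddles the cut; in practice the weights would be learned from labeled transitions.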

📝 Abstract
With the continuous progress of visual generation technologies, the scale of video datasets has grown exponentially. The quality of these datasets plays a pivotal role in the performance of video generation models. We assert that temporal splitting, detailed captions, and video quality filtering are three crucial determinants of dataset quality. However, existing datasets exhibit various limitations in these areas. To address these challenges, we introduce Koala-36M, a large-scale, high-quality video dataset featuring accurate temporal splitting, detailed captions, and superior video quality. The essence of our approach lies in improving the consistency between fine-grained conditions and video content. Specifically, we employ a linear classifier on probability distributions to enhance the accuracy of transition detection, ensuring better temporal consistency. We then provide structured captions for the split videos, with an average length of 200 words, to improve text-video alignment. Additionally, we develop a Video Training Suitability Score (VTSS) that integrates multiple sub-metrics, allowing us to filter high-quality videos from the original corpus. Finally, we incorporate several metrics into the training process of the generation model, further refining the fine-grained conditions. Our experiments demonstrate the effectiveness of our data processing pipeline and the quality of the proposed Koala-36M dataset. Our dataset and code have been released at https://koala36m.github.io/.
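The abstract describes VTSS as a single score that integrates multiple quality sub-metrics and is then used to filter the corpus. A minimal sketch of that filtering step is below; the sub-metric names, the weights, and the threshold are illustrative assumptions, since the abstract does not specify them.

```python
def vtss(sub_metrics, weights):
    """Aggregate normalized sub-metric scores (each in [0, 1]) into one
    training-suitability score via a weighted average.
    Metric names and weights here are illustrative, not from the paper."""
    total = sum(weights.values())
    return sum(weights[name] * sub_metrics[name] for name in weights) / total

def filter_clips(clips, threshold=0.6):
    """Keep only clips whose VTSS clears the threshold (value illustrative)."""
    return [clip for clip in clips if clip["vtss"] >= threshold]
```

For example, a clip scoring 0.8 on an aesthetic sub-metric, 0.6 on motion, and 0.9 on clarity with equal weights gets a VTSS of about 0.77 and would survive a 0.6 cutoff, while a clip averaging 0.5 would be dropped.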
Problem

Research questions and friction points this paper is trying to address.

Improving consistency between fine-grained conditions and video content
Enhancing temporal splitting and transition detection accuracy
Filtering high-quality videos using integrated sub-metrics (VTSS)
Innovation

Methods, ideas, or system contributions that make the work stand out.

Linear classifier enhances transition detection accuracy
Structured captions improve text-video alignment
VTSS filters high-quality videos effectively