ViMix-14M: A Curated Multi-Source Video-Text Dataset with Long-Form, High-Quality Captions and Crawl-Free Access

📅 2025-11-23
🤖 AI Summary
Open-source video-text generative models are hindered by the scarcity of high-quality, scalable training data: existing public resources largely rely on manual YouTube crawling and suffer from link rot, access restrictions, and licensing uncertainty, resulting in limited scale and inconsistent quality. To address this, we introduce ViMix-14M, a multi-source, openly accessible dataset of 14 million high-fidelity video-text pairs, available for direct download without web scraping. A real-annotation-guided, multi-granularity re-captioning pipeline achieves fine-grained alignment across actions, scenes, and temporal structure, while unified deduplication and multi-stage quality filtering ensure semantic consistency and fidelity. Extensive experiments show that ViMix-14M significantly outperforms comparable datasets on cross-modal retrieval, text-to-video generation, and video question answering, advancing the training and generalization of open-source video foundation models.

📝 Abstract
Text-to-video generation has surged in interest since Sora, yet open-source models still face a data bottleneck: there is no large, high-quality, easily obtainable video-text corpus. Existing public datasets typically require manual YouTube crawling, which yields low usable volume due to link rot and access limits, and raises licensing uncertainty. This work addresses this challenge by introducing ViMix-14M, a curated multi-source video-text dataset of around 14 million pairs that provides crawl-free, download-ready access and long-form, high-quality captions tightly aligned to video. ViMix-14M is built by merging diverse open video sources, followed by unified de-duplication and quality filtering, and a multi-granularity, ground-truth-guided re-captioning pipeline that refines descriptions to better match actions, scenes, and temporal structure. We evaluate the dataset on multimodal retrieval, text-to-video generation, and video question answering tasks, observing consistent improvements over counterpart datasets. We hope this work helps remove a key barrier to training and fine-tuning open-source video foundation models, and provides insights into building high-quality, generalizable video-text datasets.
Problem

Research questions and friction points this paper is trying to address.

Lack of large high-quality accessible video-text datasets
Existing datasets require manual crawling with licensing issues
Open-source video models face data bottleneck for training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Merging diverse open video sources
Applying unified de-duplication and quality filtering
Implementing multi-granularity ground-truth-guided re-captioning
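The paper does not release pipeline code, but the merge/de-duplication/quality-filter stages above can be sketched in miniature. The sketch below is an assumption-laden illustration, not the authors' implementation: it uses an exact fingerprint match as a stand-in for whatever video-similarity hashing the real pipeline applies, and a simple caption-length threshold as a stand-in for its multi-stage quality filters.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Pair:
    video_id: str
    fingerprint: bytes  # stand-in for a perceptual video hash
    caption: str

def deduplicate(pairs):
    """Keep the first pair for each video fingerprint (exact-match sketch)."""
    seen, kept = set(), []
    for p in pairs:
        key = hashlib.sha256(p.fingerprint).hexdigest()
        if key not in seen:
            seen.add(key)
            kept.append(p)
    return kept

def quality_filter(pairs, min_caption_words=8):
    """Drop pairs whose captions are too short to describe actions and scenes."""
    return [p for p in pairs if len(p.caption.split()) >= min_caption_words]

# Merge pairs from multiple (hypothetical) open sources, then dedup and filter.
sources = [
    [Pair("a", b"\x01", "A chef dices onions on a wooden board in a sunlit kitchen.")],
    [Pair("b", b"\x01", "Someone cooking.")],  # duplicate fingerprint of "a"
    [Pair("c", b"\x02", "A dog runs.")],       # caption too short, filtered out
]
merged = [p for source in sources for p in source]
clean = quality_filter(deduplicate(merged))
```

In this toy run only the first pair survives: `b` is removed as a fingerprint duplicate and `c` fails the caption-length check. The re-captioning stage would then rewrite the surviving captions against ground-truth annotations, which is beyond this sketch.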