DanQing: An Up-to-Date Large-Scale Chinese Vision-Language Pre-training Dataset

📅 2026-01-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the long-standing scarcity of high-quality, large-scale Chinese image–text pairs for vision–language pre-training. To this end, the authors construct DanQing, a dataset of 100 million carefully curated Chinese image–text pairs derived from Common Crawl web data collected between 2024 and 2025. An automated image–text alignment pipeline followed by multi-stage quality filtering ensures strong semantic coherence and temporal relevance. As a 100-million-scale Chinese cross-modal resource built from recent web content and subjected to rigorous filtering, DanQing significantly enhances a model's capacity to capture emerging semantic trends. Models pretrained on it consistently outperform existing baselines across multiple benchmarks, including Chinese zero-shot classification, cross-modal retrieval, and multimodal large language model evaluation.
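The image–text alignment filtering described above can be illustrated with a minimal embedding-similarity gate. This is a hypothetical sketch, not the paper's actual pipeline: the function names and the threshold value are assumptions, and real systems would use a pretrained vision–language encoder (e.g. CLIP-style) to produce the embeddings.

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Row-wise cosine similarity between two (N, D) embedding matrices."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return (a * b).sum(axis=-1)

def filter_pairs(image_embs: np.ndarray, text_embs: np.ndarray,
                 threshold: float = 0.3):
    """Keep only image-text pairs whose embedding similarity clears the
    threshold. The 0.3 default is illustrative, not a published setting."""
    sims = cosine_sim(image_embs, text_embs)
    keep = sims >= threshold
    return keep, sims

# Toy example: the first pair is aligned, the second is not.
img = np.array([[1.0, 0.0], [0.0, 1.0]])
txt = np.array([[1.0, 0.0], [1.0, 0.0]])
keep, sims = filter_pairs(img, txt)
```

In practice such a gate would be one stage among several (deduplication, text cleaning, NSFW filtering); the point here is only the shape of the similarity test.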

📝 Abstract
Vision-Language Pre-training (VLP) models have achieved remarkable success by leveraging large-scale image-text pairs. While English-centric models like CLIP and SigLIP benefit from massive datasets (e.g., LAION-400M), the development of Chinese VLP remains bottlenecked by the lack of high-quality, large-scale open-source data. In this paper, we present DanQing, a large-scale Chinese cross-modal dataset containing 100 million high-quality image-text pairs curated from Common Crawl. To ensure superior data quality, we develop an effective systematic pipeline comprising data source selection, text refinement, visual diversification, and cross-modal cross-batch filtering, thereby effectively mitigating the intrinsic noise prevalent in web data. Notably, DanQing incorporates data from 2024-2025, enabling models to capture contemporary semantic trends and emerging concepts. Extensive experiments via continued pretraining of SigLIP2 models demonstrate that DanQing consistently outperforms existing Chinese datasets across diverse downstream tasks, including zero-shot classification, cross-modal retrieval, and Chinese-centric large multimodal model tasks. Furthermore, in-depth analysis of DanQing reveals that it exhibits a more balanced semantic distribution and superior scaling capability compared to existing datasets. To facilitate further research in Chinese vision-language pre-training, we will open-source the DanQing dataset under the Creative Commons CC-BY 4.0 license.
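The abstract evaluates DanQing via continued pretraining of SigLIP2 models. SigLIP-family models replace the usual softmax contrastive objective with a pairwise sigmoid loss over all image–text combinations in a batch. The following NumPy sketch shows the shape of that objective; the temperature and bias values are illustrative defaults, not the paper's settings.

```python
import numpy as np

def siglip_loss(img: np.ndarray, txt: np.ndarray,
                t: float = 10.0, b: float = -10.0) -> float:
    """Pairwise sigmoid contrastive loss (SigLIP-style sketch).

    Each (image i, text j) pair gets an independent binary label:
    +1 on the diagonal (matched pair), -1 elsewhere. `t` and `b` are
    learned in real models; fixed here for illustration.
    """
    # L2-normalize embeddings before taking similarities.
    img = img / np.linalg.norm(img, axis=1, keepdims=True)
    txt = txt / np.linalg.norm(txt, axis=1, keepdims=True)
    logits = t * (img @ txt.T) + b          # (N, N) similarity logits
    n = img.shape[0]
    labels = 2.0 * np.eye(n) - 1.0          # +1 matched, -1 unmatched
    # -log sigmoid(label * logit), computed stably via logaddexp.
    return float(np.mean(np.logaddexp(0.0, -labels * logits)))
```

Unlike a softmax loss, each pair contributes independently, which is what lets SigLIP-style training scale batch composition flexibly; this sketch omits the distributed "chunked" computation used in practice.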
Problem

Research questions and friction points this paper is trying to address.

Chinese vision-language pre-training
image-text dataset
data scarcity
cross-modal learning
large-scale dataset
Innovation

Methods, ideas, or system contributions that make the work stand out.

Chinese vision-language pretraining
large-scale dataset
data curation pipeline
temporally recent web data
contrastive pretraining