Scaling Pre-training to One Hundred Billion Data for Vision Language Models

📅 2025-02-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the performance limits and inclusivity implications of scaling vision-language model pretraining data to 100 billion image-text pairs. Method: large-scale contrastive learning, cross-lingual evaluation, and data distribution analysis, comparing models trained on the full noisy web-scale corpus against models trained on subsets pruned with CLIP-based quality filters. Contribution/Results: while performance plateaus on mainstream Western-centric benchmarks, substantial gains emerge on culturally diverse tasks, low-resource language understanding, and long-tail concept coverage. Critically, aggressive quality filtering, despite improving benchmark scores, systematically degrades the cultural diversity represented in the data. The authors provide empirical evidence that raw, noisy web data, though conventionally deemed "low-quality", is indispensable for building truly inclusive multimodal systems. This reveals a fundamental misalignment between standard evaluation paradigms and inclusivity objectives, challenging prevailing data curation practices in vision-language pretraining.
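
The CLIP-based quality filtering mentioned above is typically implemented by scoring each image-text pair with a pretrained CLIP model and discarding pairs whose image-text similarity falls below a threshold. The sketch below illustrates that idea with the open-source `open_clip` library; the model choice and the 0.28 threshold are illustrative assumptions borrowed from common LAION-style pipelines, not the paper's exact setup.

```python
import torch
import open_clip
from PIL import Image

# Load a pretrained CLIP model (illustrative choice; the paper's exact
# filtering model is not specified in this summary).
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval()

def clip_score(image_path: str, caption: str) -> float:
    """Cosine similarity between the image and text embeddings."""
    image = preprocess(Image.open(image_path)).unsqueeze(0)
    text = tokenizer([caption])
    with torch.no_grad():
        img_emb = model.encode_image(image)
        txt_emb = model.encode_text(text)
        img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
        txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    return (img_emb @ txt_emb.T).item()

def filter_pairs(pairs, threshold=0.28):
    """Keep only pairs scoring above the similarity threshold
    (0.28 is an assumed heuristic, not the paper's value)."""
    return [(img, cap) for img, cap in pairs
            if clip_score(img, cap) >= threshold]
```

The paper's key observation is that precisely this kind of filtering, while it raises scores on standard benchmarks, disproportionately discards the culturally diverse, long-tail content that makes a 100-billion-scale corpus valuable.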

📝 Abstract
We provide an empirical investigation of the potential of pre-training vision-language models on an unprecedented scale: 100 billion examples. We find that model performance tends to saturate at this scale on many common Western-centric classification and retrieval benchmarks, such as COCO Captions. Nevertheless, tasks of cultural diversity achieve more substantial gains from the 100-billion scale web data, thanks to its coverage of long-tail concepts. Furthermore, we analyze the model's multilinguality and show gains in low-resource languages as well. In addition, we observe that reducing the size of the pretraining dataset via quality filters like using CLIP, typically used to enhance performance, may inadvertently reduce the cultural diversity represented even in large-scale datasets. Our results highlight that while traditional benchmarks may not benefit significantly from scaling noisy, raw web data to 100 billion examples, this data scale is vital for building truly inclusive multimodal systems.
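
For context, the contrastive pre-training referenced in the abstract pairs an image encoder with a text encoder and trains both so that matching image-text pairs score higher than mismatched ones. Below is a minimal sketch of the standard symmetric CLIP-style (InfoNCE) objective; the function name, batch layout, and temperature value are illustrative assumptions, not the paper's exact training recipe.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(img_emb: torch.Tensor,
                          txt_emb: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    img_emb, txt_emb: (batch, dim) L2-normalized embeddings, where
    row i of both tensors comes from the same image-text pair.
    """
    logits = img_emb @ txt_emb.T / temperature       # (batch, batch) similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)      # match each image to its text
    loss_t2i = F.cross_entropy(logits.T, targets)    # match each text to its image
    return (loss_i2t + loss_t2i) / 2
```

The symmetric form trains both retrieval directions (image-to-text and text-to-image), which is what the classification and retrieval benchmarks mentioned in the abstract evaluate.
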
Problem

Research questions and friction points this paper is trying to address.

Exploring vision-language model pre-training at the 100-billion-example scale
Assessing gains in cultural diversity and low-resource language understanding
Analyzing the impact of quality filtering on cultural representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Scaled pre-training to 100 billion image-text examples
Demonstrated substantial gains on culturally diverse tasks
Showed improved performance on low-resource languages
Xiao Wang
Ibrahim M. Alabdulmohsin
Daniel M. Salz
Zhe Li
Keran Rong
Xiaohua Zhai
Google DeepMind
Representation Learning · Vision and Language · Computer Vision