Demystifying CLIP Data

📅 2023-09-28
🏛️ International Conference on Learning Representations
📈 Citations: 77
Influential: 9
🤖 AI Summary
This study addresses the opacity surrounding CLIP's training data: its provenance, filtering criteria, and curation process have never been disclosed. The authors present a systematic deconstruction and reproducible reconstruction of CLIP's data curation pipeline. To this end, they propose MetaCLIP, a metadata-driven vision-language pretraining approach that takes a raw CommonCrawl image-text pool and metadata derived from CLIP's concepts, then yields a subset balanced over the metadata distribution. With model and training settings held fixed, MetaCLIP's 400M-pair dataset reaches 70.8% zero-shot top-1 accuracy on ImageNet with ViT-B (+2.5% over CLIP); scaling the data to 1B pairs under the same training budget raises this to 72.4%, and ViT-H reaches 80.5%. Curation code and the training data distribution over metadata are publicly released.
📝 Abstract
Contrastive Language-Image Pre-training (CLIP) is an approach that has advanced research and applications in computer vision, fueling modern recognition systems and generative models. We believe that the main ingredient to the success of CLIP is its data and not the model architecture or pre-training objective. However, CLIP only provides very limited information about its data and how it has been collected, leading to works that aim to reproduce CLIP's data by filtering with its model parameters. In this work, we intend to reveal CLIP's data curation approach and in our pursuit of making it open to the community introduce Metadata-Curated Language-Image Pre-training (MetaCLIP). MetaCLIP takes a raw data pool and metadata (derived from CLIP's concepts) and yields a balanced subset over the metadata distribution. Our experimental study rigorously isolates the model and training settings, concentrating solely on data. MetaCLIP applied to CommonCrawl with 400M image-text data pairs outperforms CLIP's data on multiple standard benchmarks. In zero-shot ImageNet classification, MetaCLIP achieves 70.8% accuracy, surpassing CLIP's 68.3% on ViT-B models. Scaling to 1B data, while maintaining the same training budget, attains 72.4%. Our observations hold across various model sizes, exemplified by ViT-H achieving 80.5%, without any bells-and-whistles. Curation code and training data distribution on metadata are made available at https://github.com/facebookresearch/MetaCLIP.
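The curation step the abstract describes (match raw image-text pairs against concept metadata, then balance the kept subset over the metadata distribution) can be sketched as follows. This is a minimal illustration, not the paper's released implementation: the function name `curate`, the toy data layout, and the per-entry cap `t` are assumptions for the sketch (the paper reports capping head entries at roughly 20k matches for the 400M-pair scale).

```python
import random
from collections import defaultdict

def curate(pairs, metadata, t, seed=0):
    """Sketch of metadata-balanced curation (hypothetical helper, not the official code).

    pairs:    list of (image_id, caption) tuples from a raw pool
    metadata: list of concept strings (e.g., derived from CLIP's concepts)
    t:        per-entry cap; tail entries are kept whole, head entries down-sampled
    """
    rng = random.Random(seed)

    # 1) Substring-match each caption against every metadata entry.
    matches = defaultdict(list)  # concept -> indices of pairs whose caption mentions it
    for i, (_, caption) in enumerate(pairs):
        text = caption.lower()
        for entry in metadata:
            if entry in text:
                matches[entry].append(i)

    # 2) Balance over the metadata distribution: keep all matches of rare
    #    (tail) entries, and sub-sample frequent (head) entries down toward t.
    keep = set()
    for entry, idxs in matches.items():
        p = min(1.0, t / len(idxs))  # keep probability per matched pair
        for i in idxs:
            if rng.random() < p:
                keep.add(i)

    # Pairs matching no entry are dropped entirely.
    return [pairs[i] for i in sorted(keep)]
```

Note the two effects this sampling has: unmatched pairs are discarded, and the long tail of concepts is up-weighted relative to head concepts, which is the "balanced subset over the metadata distribution" the abstract refers to.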
Problem

Research questions and friction points this paper is trying to address.

CLIP Data Characteristics
Data Source Analysis
Model Performance Impact
Innovation

Methods, ideas, or system contributions that make the work stand out.

MetaCLIP
Optimal Data Subset Selection
CLIP Data Optimization