Multilingual Diversity Improves Vision-Language Representations

📅 2024-05-27
🏛️ Neural Information Processing Systems
📈 Citations: 10
Influential: 0
🤖 AI Summary
Current vision-language pretraining relies heavily on English image-text pairs, overlooking the cultural and conceptual diversity embedded in multilingual data and thereby limiting model generalization. To address this, we propose a multilingual enhancement pipeline: (1) construct a multilingual image-text corpus from a raw web crawl; (2) translate non-English samples to English with a large-scale multilingual model (e.g., NLLB); and (3) apply cross-lingual re-filtering to raise the share of high-quality multilingual instances, followed by CLIP-style contrastive learning. Crucially, we provide the first systematic empirical validation that translation- and filtering-enhanced multilingual data significantly boosts performance on English vision tasks, and we show that multilingual and English data are intrinsically complementary in the joint image-text embedding space. Experiments demonstrate consistent gains over English-only baselines on ImageNet, ImageNet distribution shifts, image-text retrieval, and on average across 38 tasks from the DataComp benchmark, with particularly pronounced improvements in geographically diverse settings: under the GeoDE evaluation protocol, the largest gains come from Africa.
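The final stage of the pipeline above is standard CLIP-style contrastive learning over the re-filtered image-text pairs. As a rough illustration, here is a minimal NumPy sketch of the symmetric InfoNCE objective used in such training; the function name and temperature value are illustrative, not taken from the paper:

```python
import numpy as np

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    image_emb, text_emb: (batch, dim) arrays; row i of each is a matched pair.
    """
    # L2-normalize so dot products become cosine similarities.
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

    logits = image_emb @ text_emb.T / temperature  # (batch, batch) similarity matrix
    labels = np.arange(logits.shape[0])            # matched pairs sit on the diagonal

    def cross_entropy(l, y):
        l = l - l.max(axis=1, keepdims=True)       # shift for numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(y)), y].mean()

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (cross_entropy(logits, labels) + cross_entropy(logits.T, labels))
```

With perfectly separated embeddings (e.g., orthonormal rows shared by both modalities) the loss approaches zero, which is the intuition behind pulling matched pairs together and pushing mismatched pairs apart.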

📝 Abstract
Massive web-crawled image-text datasets lay the foundation for recent progress in multimodal learning. These datasets are designed with the goal of training a model to do well on standard computer vision benchmarks, many of which, however, have been shown to be English-centric (e.g., ImageNet). Consequently, existing data curation techniques gravitate towards using predominantly English image-text pairs and discard many potentially useful non-English samples. Our work questions this practice. Multilingual data is inherently enriching not only because it provides a gateway to learn about culturally salient concepts, but also because it depicts common concepts differently from monolingual data. We thus conduct a systematic study to explore the performance benefits of using more samples of non-English origins with respect to English vision tasks. By translating all multilingual image-text pairs from a raw web crawl to English and re-filtering them, we increase the prevalence of (translated) multilingual data in the resulting training set. Pre-training on this dataset outperforms using English-only or English-dominated datasets on ImageNet, ImageNet distribution shifts, image-English-text retrieval and on average across 38 tasks from the DataComp benchmark. On a geographically diverse task like GeoDE, we also observe improvements across all regions, with the biggest gain coming from Africa. In addition, we quantitatively show that English and non-English data are significantly different in both image and (translated) text space. We hope that our findings motivate future work to be more intentional about including multicultural and multilingual data, not just when non-English or geographically diverse tasks are involved, but to enhance model capabilities at large.
Problem

Research questions and friction points this paper is trying to address.

Investigating benefits of non-English data for vision tasks
Evaluating multilingual data impact on model performance
Addressing English-centric bias in multimodal datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leveraging translated multilingual image-text pairs
Re-filtering web-crawled data to increase diversity
Enhancing model performance through multicultural data integration
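The translate-then-refilter idea in the bullets above can be sketched as a small data-curation step. The following is a hypothetical sketch under assumed interfaces: `translate` stands in for an NLLB-style translator and `clip_score` for an image-text similarity scorer used in DataComp-style filtering; the names, signatures, and threshold are all illustrative:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Sample:
    image_id: str
    caption: str
    lang: str  # e.g. an ISO code from a language-ID model

def translate_and_refilter(
    samples: List[Sample],
    translate: Callable[[str, str], str],    # (text, src_lang) -> English text
    clip_score: Callable[[str, str], float], # (image_id, english_caption) -> similarity
    threshold: float = 0.3,                  # illustrative cutoff
) -> List[Sample]:
    """Translate non-English captions to English, then keep only pairs whose
    image-text similarity clears the threshold (CLIP-score-style re-filtering)."""
    kept = []
    for s in samples:
        caption = s.caption if s.lang == "eng" else translate(s.caption, s.lang)
        if clip_score(s.image_id, caption) >= threshold:
            kept.append(Sample(s.image_id, caption, "eng"))
    return kept
```

Because filtering happens after translation, non-English pairs compete on equal footing with English ones, which is how the method raises the prevalence of (translated) multilingual data in the final training set.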