Where Do Images Come From? Analyzing Captions to Geographically Profile Datasets

📅 2026-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the pronounced geographic bias in the training data of text-to-image models, which limits the global diversity of generated content. It is the first to systematically use large language models (LLMs) to extract geographic locations from multilingual image captions, enabling geospatial provenance and statistical analysis of large-scale multimodal datasets. The findings reveal that samples from the United States, United Kingdom, and Canada collectively account for 48% of the data, while South America and Africa represent only 1.8% and 3.8%, respectively. A strong positive correlation exists between a country's GDP and its representation in the data (ρ = 0.82). Notably, highly represented regions do not exhibit greater visual or semantic diversity, and images generated by Stable Diffusion show substantially narrower geographic coverage than real-world distributions. This work uncovers the structural roots of data bias and provides a foundation for developing more inclusive generative models.

📝 Abstract
Recent studies show that text-to-image models often fail to generate geographically representative images, raising concerns about the representativeness of their training data and motivating the question: which parts of the world do these training examples come from? We geographically profile large-scale multimodal datasets by mapping image-caption pairs to countries based on location information extracted from captions using LLMs. Studying English captions from three widely used datasets (Re-LAION, DataComp1B, and Conceptual Captions) across 20 common entities (e.g., house, flag), we find that the United States, the United Kingdom, and Canada account for 48.0% of samples, while South American and African countries are severely under-represented with only 1.8% and 3.8% of images, respectively. We observe a strong correlation between a country's GDP and its representation in the data (ρ = 0.82). Examining non-English subsets for 4 languages from the Re-LAION dataset, we find that representation skews heavily toward countries where these languages are predominantly spoken. Additionally, we find that higher representation does not necessarily translate to greater visual or semantic diversity. Finally, analyzing country-specific images generated by Stable Diffusion v1.3 trained on Re-LAION, we show that while generations appear realistic, they are severely limited in their coverage compared to real-world images.
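The profiling step described in the abstract, mapping captions to countries and then correlating per-country representation with GDP via Spearman's ρ, can be sketched in a few lines. Everything below is a toy illustration: the country labels stand in for the paper's LLM-based location extraction, and the GDP figures are rough placeholders, not the paper's data.

```python
from collections import Counter

# Toy stand-in for the extraction step: each caption has already been
# mapped to a country code (these labels and counts are illustrative).
caption_countries = (["US"] * 5 + ["UK"] * 4 + ["CA"] * 3
                     + ["BR"] * 2 + ["NG"] * 1)

# Per-country representation: fraction of the dataset per country.
counts = Counter(caption_countries)
total = sum(counts.values())
share = {c: n / total for c, n in counts.items()}

# Hypothetical GDP figures (trillions USD), placeholders only.
gdp = {"US": 27.4, "UK": 3.3, "BR": 2.2, "CA": 2.1, "NG": 0.4}

def spearman_rho(xs, ys):
    """Spearman rank correlation for the no-ties case."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

countries = sorted(share)
rho = spearman_rho([gdp[c] for c in countries],
                   [share[c] for c in countries])
print(f"Spearman rho = {rho:.2f}")  # 0.90 on this toy data
```

On real data the per-country shares would come from millions of captions and the correlation would be computed over all countries with sufficient samples; the rank-based ρ is robust to GDP's heavy-tailed distribution, which is presumably why the paper reports Spearman rather than Pearson correlation.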
Problem

Research questions and friction points this paper is trying to address.

geographic bias
dataset representativeness
text-to-image models
multimodal datasets
data diversity
Innovation

Methods, ideas, or system contributions that make the work stand out.

geographic profiling
multimodal datasets
caption analysis
representation bias
text-to-image generation