Global PIQA: Evaluating Physical Commonsense Reasoning Across 100+ Languages and Cultures

📅 2025-10-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Cross-cultural commonsense reasoning benchmarks remain scarce, and existing suites rarely offer a systematic evaluation of large language models' (LLMs') culturally specific, everyday physical commonsense. Method: We introduce the first multilingual physical commonsense benchmark covering 100+ languages, 14 language families, 23 writing systems, and cultures from 65 countries across five continents, curated by hand by 335 native-speaker researchers; over 50% of the samples are non-parallel and incorporate localized elements (e.g., regional foods and customs). A participatory cultural-authenticity assurance process and a non-parallel data split enable fair, zero-shot cross-cultural evaluation. Contribution/Results: Experiments show strong aggregate performance by state-of-the-art models, yet up to a 37% accuracy gap on lower-resource languages; open models significantly underperform proprietary counterparts, exposing critical gaps in multilingual commonsense assessment. The benchmark is publicly released.

📝 Abstract
To date, there exist almost no culturally-specific evaluation benchmarks for large language models (LLMs) that cover a large number of languages and cultures. In this paper, we present Global PIQA, a participatory commonsense reasoning benchmark for over 100 languages, constructed by hand by 335 researchers from 65 countries around the world. The 116 language varieties in Global PIQA cover five continents, 14 language families, and 23 writing systems. In the non-parallel split of Global PIQA, over 50% of examples reference local foods, customs, traditions, or other culturally-specific elements. We find that state-of-the-art LLMs perform well on Global PIQA in aggregate, but they exhibit weaker performance in lower-resource languages (up to a 37% accuracy gap, despite random chance at 50%). Open models generally perform worse than proprietary models. Global PIQA highlights that in many languages and cultures, everyday knowledge remains an area for improvement, alongside more widely-discussed capabilities such as complex reasoning and expert knowledge. Beyond its uses for LLM evaluation, we hope that Global PIQA provides a glimpse into the wide diversity of cultures in which human language is embedded.
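The evaluation setup described above (two candidate completions per prompt, zero-shot accuracy against a 50% random baseline) can be sketched as follows. This is an illustrative assumption, not the paper's code: the field names (`goal`, `sol1`, `sol2`, `label`) and the toy scoring heuristic are hypothetical, and a real evaluation would replace `score_choice` with an LLM's per-token log-likelihood of each completion.

```python
# Minimal sketch of zero-shot scoring on a two-choice, PIQA-style benchmark.
# Dataset schema and scoring function are illustrative assumptions only.

def score_choice(prompt: str, choice: str) -> float:
    """Stand-in for a model's log-likelihood of `choice` given `prompt`.

    A trivial length heuristic keeps the sketch runnable; a real run
    would query an LLM and sum the log-probabilities of the choice tokens.
    """
    return -abs(len(choice) - len(prompt)) / max(len(prompt), 1)

def evaluate(examples: list[dict]) -> float:
    """Zero-shot accuracy: predict the higher-scoring solution per item."""
    correct = 0
    for ex in examples:
        s1 = score_choice(ex["goal"], ex["sol1"])
        s2 = score_choice(ex["goal"], ex["sol2"])
        pred = 0 if s1 >= s2 else 1
        correct += int(pred == ex["label"])
    return correct / len(examples)

# Hypothetical items in the assumed schema (not from the benchmark itself).
examples = [
    {"goal": "To keep rice from sticking,",
     "sol1": "rinse it before cooking.",
     "sol2": "freeze it before cooking.", "label": 0},
    {"goal": "To light a charcoal grill,",
     "sol1": "pour water on the coals.",
     "sol2": "use a chimney starter.", "label": 1},
]
print(f"accuracy = {evaluate(examples):.2f}")
```

Because each item has exactly two choices, a model guessing at random lands near 0.50, which is why the paper frames the 37% gap on lower-resource languages against that 50% chance floor.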
Problem

Research questions and friction points this paper is trying to address.

Addresses the near-total lack of culturally specific evaluation benchmarks for large language models (LLMs)
Evaluates everyday physical commonsense reasoning across 100+ languages and cultures
Highlights performance gaps in lower-resource languages and cultural contexts (up to a 37% accuracy gap)
Innovation

Methods, ideas, or system contributions that make the work stand out.

Built a participatory benchmark spanning 100+ languages and 116 language varieties
Culturally specific examples constructed by hand by 335 researchers from 65 countries
Evaluated LLMs across 14 language families, 23 writing systems, and diverse cultural traditions