🤖 AI Summary
Existing multilingual open-domain question answering (ODQA) benchmarks rely heavily on English-centric evaluation and neglect how regional and cultural differences affect question understanding and answer generation, which leads to systematic assessment bias. To address this, we propose XLQA, the first multilingual ODQA benchmark explicitly designed for locale-aware evaluation. XLQA covers eight languages and is built from 3,000 English seed questions that experts annotate as either locale-sensitive or locale-invariant. The data is constructed through semantic consistency filtering, cross-lingual expansion, and expert validation, yielding a scalable framework for evaluating locale sensitivity. A comprehensive evaluation of five state-of-the-art multilingual large language models reveals a significant performance drop on locale-sensitive questions, uncovering a structural disconnect between linguistic competence and regional and cultural awareness. This work establishes a paradigm for fair, culturally adaptive evaluation in multilingual QA.
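The summary above mentions semantic consistency filtering as part of the data-construction pipeline. As a rough illustration only, the sketch below shows one plausible way such a filter could work, assuming a multilingual sentence-embedding model (here `sentence-transformers/LaBSE`) and a cosine-similarity threshold of 0.85; neither the model choice nor the threshold is specified in the summary.

```python
# Minimal sketch of a semantic consistency filter (illustrative; the actual
# XLQA pipeline may use a different model, threshold, or procedure).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/LaBSE")  # assumed model choice

def passes_consistency_filter(english_seed: str, translated: str,
                              threshold: float = 0.85) -> bool:
    """Keep a translated question only if it stays semantically close to its English seed."""
    emb = model.encode([english_seed, translated],
                       convert_to_tensor=True, normalize_embeddings=True)
    similarity = util.cos_sim(emb[0], emb[1]).item()
    return similarity >= threshold

# Example: drop cross-lingual expansions that drift from the seed question's meaning.
seed = "What is the legal driving age?"
candidates = {
    "ko": "법적으로 운전할 수 있는 나이는 몇 살인가요?",
    "de": "Wie alt muss man sein, um Auto fahren zu dürfen?",
}
filtered = {lang: q for lang, q in candidates.items()
            if passes_consistency_filter(seed, q)}
```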
📝 Abstract
Large Language Models (LLMs) have shown significant progress in open-domain question answering (ODQA), yet most evaluations focus on English and assume locale-invariant answers across languages. This assumption overlooks the cultural and regional variations that affect question understanding and answer generation, leading to biased evaluation in multilingual benchmarks. To address these limitations, we introduce XLQA, a novel benchmark explicitly designed for locale-sensitive multilingual ODQA. XLQA contains 3,000 English seed questions expanded into eight languages, with careful filtering for semantic consistency and human-verified annotations distinguishing locale-invariant from locale-sensitive cases. Our evaluation of five state-of-the-art multilingual LLMs reveals notable failures on locale-sensitive questions, exposing gaps between English and other languages due to a lack of locale-grounding knowledge. We provide a systematic framework and scalable methodology for assessing multilingual QA under diverse cultural contexts, offering a critical resource for advancing the real-world applicability of multilingual ODQA systems. Our findings suggest that disparities in training data distribution contribute to differences in both linguistic competence and locale-awareness across models.
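The abstract's central measurement is the gap between locale-invariant and locale-sensitive performance per language. The sketch below shows one straightforward way to compute that gap from per-question evaluation records; the field names (`lang`, `category`, `correct`) are hypothetical and not taken from XLQA's actual schema.

```python
# Hedged sketch of the gap analysis described above: per-language accuracy on
# locale-invariant vs. locale-sensitive questions. Record fields are assumed.
from collections import defaultdict

def locale_gap(results: list[dict]) -> dict[str, float]:
    """Return, per language, accuracy on locale-invariant minus locale-sensitive questions."""
    tallies = defaultdict(lambda: {"locale-sensitive": [0, 0],
                                   "locale-invariant": [0, 0]})
    for r in results:
        hits, total = tallies[r["lang"]][r["category"]]
        tallies[r["lang"]][r["category"]] = [hits + int(r["correct"]), total + 1]

    gaps = {}
    for lang, cats in tallies.items():
        acc = {c: (hits / n if n else 0.0) for c, (hits, n) in cats.items()}
        gaps[lang] = acc["locale-invariant"] - acc["locale-sensitive"]
    return gaps

# Example: a positive gap means the model does worse on locale-sensitive questions.
demo = [
    {"lang": "ko", "category": "locale-sensitive", "correct": False},
    {"lang": "ko", "category": "locale-invariant", "correct": True},
]
print(locale_gap(demo))  # {'ko': 1.0}
```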