🤖 AI Summary
Existing large language models (LLMs) lack rigorous evaluation on cross-lingual, culture-specific long-form question answering (LFQA), particularly for low-resource languages. Method: We introduce CaLMQA, a multilingual, culture-specific LFQA benchmark covering 23 languages (including Fijian and Kirundi) with 1.5K high-quality culturally specific questions plus 51 culturally agnostic questions translated from English into 22 other languages, and we define the culturally specific LFQA evaluation setting. We further propose a multidimensional evaluation framework combining automated checks (language identification, token-repetition detection) with cross-lingual human assessment. Results: Experiments reveal substantial performance degradation of state-of-the-art LLMs on low-resource languages and culturally specific questions, exposing critical limitations in cross-cultural reasoning. This work addresses a key gap in non-English LFQA evaluation and provides both a benchmark and a methodological foundation for developing culturally adaptive LLMs.
📝 Abstract
Large language models (LLMs) are used for long-form question answering (LFQA), which requires them to generate paragraph-length answers to complex questions. While LFQA has been well-studied in English, this research has not been extended to other languages. To bridge this gap, we introduce CaLMQA, a collection of 1.5K complex culturally specific questions spanning 23 languages and 51 culturally agnostic questions translated from English into 22 other languages. We define culturally specific questions as those uniquely or more likely to be asked by people from cultures associated with the question's language. We collect naturally occurring questions from community web forums and hire native speakers to write questions to cover under-resourced, rarely-studied languages such as Fijian and Kirundi. Our dataset contains diverse, complex questions that reflect cultural topics (e.g., traditions, laws, news) and the language usage of native speakers. We automatically evaluate a suite of open- and closed-source models on CaLMQA by detecting incorrect language and token repetitions in answers, and observe that the quality of LLM-generated answers degrades significantly for some low-resource languages. Lastly, we perform human evaluation on a subset of models and languages; this manual evaluation reveals that model performance is significantly worse for culturally specific questions than for culturally agnostic questions. Our findings highlight the need for further research in non-English LFQA and provide an evaluation framework.
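To make the automatic evaluation concrete, below is a minimal sketch of how the two checks mentioned in the abstract (wrong-language answers and degenerate token repetition) might be implemented. This is not the paper's released code: the langdetect library and the n-gram repetition heuristic are illustrative assumptions, and low-resource languages such as Fijian or Kirundi may require a language identifier with broader coverage (e.g., a fastText LID model).

```python
# Minimal sketch (assumption, not the paper's implementation) of the two
# automatic checks: answers in the wrong language and answers with
# excessive token repetition.
from collections import Counter

from langdetect import detect, LangDetectException  # pip install langdetect


def wrong_language(answer: str, expected_lang: str) -> bool:
    """Return True if the detected language differs from the question's language."""
    try:
        return detect(answer) != expected_lang
    except LangDetectException:
        # Empty or undetectable text is treated as a failure.
        return True


def has_repetition(answer: str, n: int = 5, threshold: int = 3) -> bool:
    """Return True if any n-gram of whitespace tokens occurs `threshold` or more times."""
    tokens = answer.split()
    ngrams = Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return any(count >= threshold for count in ngrams.values())


# Example: an English, highly repetitive answer to a question asked in French
# fails both checks.
answer = "The answer is simple. " * 10
print(wrong_language(answer, expected_lang="fr"))  # True: answer is in English
print(has_repetition(answer))                      # True: the same 5-gram repeats
```

Both checks are intentionally cheap and language-agnostic, which is why they scale to all 23 languages; the thresholds for n-gram length and repetition count would need to be tuned per evaluation setup.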