AmharicStoryQA: A Multicultural Story Question Answering Benchmark in Amharic

📅 2026-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses a critical limitation in current large language model (LLM) evaluations, which often conflate language with culture and overlook intra-lingual regional cultural variation, leading to inaccurate assessments of narrative comprehension. Focusing on Amharic, a low-resource language, the work introduces the first multicultural benchmark within a single language by curating a long-form story question-answering dataset spanning diverse regions of Ethiopia. Leveraging human-collected, region-specific narratives and carefully designed QA tasks, the study systematically evaluates LLMs' ability to understand stories across distinct cultural contexts. Results reveal significant performance disparities across regions and uneven gains from supervised fine-tuning, highlighting a fundamental gap in models' capacity for fine-grained cultural narrative understanding.

📝 Abstract
With the growing emphasis on multilingual and cultural evaluation benchmarks for large language models, language and culture are often treated as synonymous, and performance is commonly used as a proxy for a model's understanding of a given language. In this work, we argue that such evaluations overlook meaningful cultural variation that exists within a single language. We address this gap by focusing on narratives from different regions of Ethiopia and demonstrate that, despite shared linguistic characteristics, region-specific and domain-specific content substantially influences language evaluation outcomes. To this end, we introduce AmharicStoryQA, a long-sequence story question answering benchmark grounded in culturally diverse narratives from Amharic-speaking regions. Using this benchmark, we reveal a significant narrative understanding gap in existing LLMs, highlight pronounced regional differences in evaluation results, and show that supervised fine-tuning yields uneven improvements across regions and evaluation settings. Our findings emphasize the need for culturally grounded benchmarks that go beyond language-level evaluation to more accurately assess and improve narrative understanding in low-resource languages.
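
To make the per-region evaluation setup concrete, below is a minimal sketch of how regional scoring on a benchmark like this might look. The record schema, the `model_answer` stub, and the exact-match metric are all illustrative assumptions; the paper does not specify its dataset format or scoring implementation here.

```python
from collections import defaultdict

# Hypothetical record format; the released dataset's actual schema may differ.
examples = [
    {"region": "Amhara", "story": "...", "question": "...", "answer": "..."},
    {"region": "Addis Ababa", "story": "...", "question": "...", "answer": "..."},
]

def model_answer(story: str, question: str) -> str:
    """Stand-in for an LLM call; replace with a real inference endpoint."""
    return ""

def per_region_accuracy(examples):
    """Score exact-match accuracy separately for each region."""
    correct, total = defaultdict(int), defaultdict(int)
    for ex in examples:
        pred = model_answer(ex["story"], ex["question"])
        total[ex["region"]] += 1
        # Exact match is a simplification; the paper's metric may differ.
        if pred.strip() == ex["answer"].strip():
            correct[ex["region"]] += 1
    return {region: correct[region] / total[region] for region in total}

print(per_region_accuracy(examples))
```

Breaking accuracy out by region rather than reporting a single aggregate score is what surfaces the kind of regional disparities the paper highlights.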
Problem

Research questions and friction points this paper is trying to address.

multilingual evaluation
cultural variation
narrative understanding
low-resource languages
language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

culturally grounded benchmark
multilingual evaluation
narrative understanding
low-resource languages
regional variation