🤖 AI Summary
Problem: Evaluation resources for large language models (LLMs) are severely scarce in non-English, non-Western cultural contexts. Method: This study introduces the first systematic evaluation framework tailored to the Persian language and Iranian culture, comprising 19 high-quality, domain-specific datasets (covering Iranian law, Persian grammar, idioms, university entrance examinations, and more) curated through rigorous human annotation and culturally aligned validation protocols, and benchmarks 41 mainstream LLMs on them. Contribution/Results: The authors release PersianBench, the first large-scale, culturally grounded Persian-language evaluation benchmark, establishing foundational standards for cross-cultural LLM assessment and advancing the evaluation of reliability and cultural fidelity for non-Western language models.
📝 Abstract
As large language models (LLMs) become increasingly embedded in our daily lives, evaluating their quality and reliability across diverse contexts has become essential. While comprehensive benchmarks exist for assessing LLM performance in English, there remains a significant gap in evaluation resources for other languages. Moreover, because most LLMs are trained primarily on data rooted in European and American cultures, they often lack familiarity with non-Western cultural contexts. To address this limitation, our study focuses on the Persian language and Iranian culture. We introduce 19 new evaluation datasets specifically designed to assess LLMs on topics such as Iranian law, Persian grammar, Persian idioms, and university entrance exams. Using these datasets, we benchmark 41 prominent LLMs, aiming to bridge the existing cultural and linguistic evaluation gap in the field.
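The abstract does not specify the evaluation protocol, but benchmarks of this kind are commonly scored as multiple-choice accuracy. The sketch below is a minimal illustration of how such scoring might look; the item schema, the Persian prompt wording, and the `query_model` hook are assumptions for illustration, not the paper's actual harness or data format.

```python
import re
from typing import Callable

# Hypothetical item format: each benchmark question is a dict with a Persian
# prompt, lettered options, and a gold answer key. The real datasets' schema
# is not described in the abstract; this is purely illustrative.
SAMPLE_ITEMS = [
    {
        "question": "کدام گزینه یک ضرب‌المثل فارسی است؟",  # "Which option is a Persian proverb?"
        "options": {"A": "...", "B": "...", "C": "...", "D": "..."},
        "answer": "B",
    },
]

def evaluate(items: list[dict], query_model: Callable[[str], str]) -> float:
    """Score a model on multiple-choice items and return accuracy."""
    correct = 0
    for item in items:
        options = "\n".join(f"{k}) {v}" for k, v in item["options"].items())
        prompt = (
            f"{item['question']}\n{options}\n"
            "پاسخ را فقط با یک حرف (A/B/C/D) بده."  # "Answer with a single letter only."
        )
        reply = query_model(prompt)
        # Take the first A-D letter in the reply as the model's choice.
        match = re.search(r"[ABCD]", reply.upper())
        if match and match.group(0) == item["answer"]:
            correct += 1
    return correct / len(items)

if __name__ == "__main__":
    # Dummy model that always answers "B", standing in for a real LLM call.
    accuracy = evaluate(SAMPLE_ITEMS, lambda prompt: "B")
    print(f"accuracy = {accuracy:.2%}")
```

In practice, `query_model` would wrap whichever chat or completion API serves each of the 41 evaluated models, and accuracy would be reported per dataset.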