🤖 AI Summary
Low-resource languages like Urdu lack fine-grained syntactic evaluation benchmarks, hindering rigorous assessment of linguistic competence in multilingual language models.
Method: We introduce UrBLiMP, the first Urdu benchmark based on minimal pair sentences, covering 10 core syntactic phenomena with 5,696 high-quality sentence pairs. The pairs were constructed from the Urdu Treebank and diverse corpora; annotations were performed manually and validated with 96.10% inter-annotator agreement, ensuring both linguistic coverage and annotation reliability.
Contribution/Results: A comprehensive evaluation of 20 state-of-the-art multilingual LMs shows that LLaMA-3-70B achieves the highest accuracy (94.73%), yet the differences among the top models are not statistically significant, pointing to a shared limitation in modeling fine-grained syntax for low-resource languages. UrBLiMP establishes a reproducible, extensible, and standardized framework for syntactic evaluation in under-resourced linguistic settings.
📝 Abstract
Multilingual Large Language Models (LLMs) have shown remarkable performance across various languages; however, their training data typically contains far less text for low-resource languages such as Urdu than for high-resource languages like English. To assess the linguistic knowledge of LLMs in Urdu, we present the Urdu Benchmark of Linguistic Minimal Pairs (UrBLiMP), i.e., pairs of minimally different sentences that contrast in grammatical acceptability. UrBLiMP comprises 5,696 minimal pairs targeting ten core syntactic phenomena, carefully curated using the Urdu Treebank and diverse Urdu text corpora. A human evaluation of UrBLiMP annotations yielded 96.10% inter-annotator agreement, confirming the reliability of the dataset. We evaluate twenty multilingual LLMs on UrBLiMP, revealing substantial variation in performance across linguistic phenomena. While LLaMA-3-70B achieves the highest average accuracy (94.73%), its performance is statistically comparable to that of other top models such as Gemma-3-27B-PT. These findings highlight both the potential and the limitations of current multilingual LLMs in capturing fine-grained syntactic knowledge in low-resource languages.
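The abstract does not spell out the scoring rule, but benchmarks in the BLiMP family are typically scored by checking whether a model assigns higher sentence log-probability to the grammatical member of each pair. A minimal sketch of that harness, using a toy unigram scorer purely as a stand-in (the function names and mini-corpus below are illustrative, not from the UrBLiMP paper):

```python
import math
from typing import Callable, List, Tuple

def minimal_pair_accuracy(
    pairs: List[Tuple[str, str]],       # (grammatical, ungrammatical)
    log_prob: Callable[[str], float],   # sentence-level log-probability
) -> float:
    """Fraction of pairs where the grammatical sentence scores higher."""
    correct = sum(1 for good, bad in pairs if log_prob(good) > log_prob(bad))
    return correct / len(pairs)

# Toy add-one-smoothed unigram scorer, only to make the harness runnable;
# a real evaluation would plug in an LLM's sentence log-likelihood here.
# (A unigram model ignores word order, so it only catches lexical contrasts.)
_corpus = "the cat sat on the mat the dog sat on the rug".split()
_counts = {w: _corpus.count(w) for w in set(_corpus)}

def toy_log_prob(sentence: str) -> float:
    return sum(
        math.log((_counts.get(w, 0) + 1) / (len(_corpus) + len(_counts)))
        for w in sentence.split()
    )

pairs = [
    ("the cat sat on the mat", "the cat sat on the zzz"),
    ("the dog sat on the rug", "the dog sat on the qqq"),
]
print(minimal_pair_accuracy(pairs, toy_log_prob))  # → 1.0
```

Because the two sentences in a pair differ minimally, this comparison isolates a single grammatical contrast per item, and per-phenomenon accuracy falls out by grouping pairs before averaging.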