🤖 AI Summary
Nepali poses unique NLU challenges due to its Devanagari script, rich morphological inflection, and significant dialectal variation; yet the existing benchmark, Nep-gLUE, covers only four tasks, offering insufficient evaluation breadth. To address this, we introduce NLUE, the first comprehensive, systematically expanded Nepali NLU evaluation benchmark, comprising 12 tasks across single-sentence classification, semantic similarity, paraphrase identification, and natural language inference, built upon eight newly curated, high-quality datasets. NLUE incorporates Devanagari-specific data cleaning protocols and annotation guidelines, with human verification and multi-round inter-annotator agreement assessments to ensure quality across tasks and dialects. Empirical evaluation reveals that state-of-the-art models achieve ≤65% accuracy on the Nepali NLI and paraphrase tasks, exposing critical gaps in complex semantic understanding. NLUE thus provides an extensive, reproducible, and standardized evaluation framework for Nepali, a low-resource language.
📝 Abstract
The Nepali language has distinct linguistic features, most notably its complex Devanagari script, rich morphology, and varied dialects, which pose unique challenges for natural language processing (NLP) evaluation. While the Nepali Language Understanding Evaluation (Nep-gLUE) benchmark provides a foundation for evaluating models, it remains limited in scope, covering only four tasks; this restricts its utility for comprehensive assessment of NLP models. To address this limitation, we introduce eight new datasets, creating an expanded benchmark, the Nepali Language Understanding Evaluation (NLUE) benchmark, which covers a total of 12 tasks for evaluating model performance across a diverse set of Natural Language Understanding (NLU) tasks. The added tasks include single-sentence classification, similarity and paraphrase tasks, and Natural Language Inference (NLI) tasks. Evaluating existing models on the added tasks, we observe that they fall short in handling complex NLU tasks effectively. This expanded benchmark sets a new standard for evaluating, comparing, and advancing models, contributing to the broader goal of advancing NLP research for low-resource languages.