Consolidating and Developing Benchmarking Datasets for the Nepali Natural Language Understanding Tasks

📅 2024-11-28
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Nepali poses unique NLU challenges due to its Devanagari script, rich morphological inflection, and significant dialectal variation; yet the existing benchmark Nep-gLUE covers only four tasks, offering insufficient evaluation breadth. To address this, we introduce NLUE—the first comprehensive, systematically expanded Nepali NLU evaluation benchmark—comprising 12 tasks across single-sentence classification, semantic similarity, paraphrase identification, and natural language inference, built upon eight newly curated, high-quality datasets. NLUE incorporates Devanagari-specific data cleaning protocols and annotation guidelines, with rigorous human verification and multi-round inter-annotator agreement assessments to ensure cross-task and cross-dialect quality. Empirical evaluation reveals that state-of-the-art models achieve ≤65% accuracy on Nepali NLI and paraphrase tasks, exposing critical gaps in complex semantic understanding. NLUE establishes the most extensive, reproducible, and standardized evaluation framework for low-resource language NLP to date.

📝 Abstract
The Nepali language has distinct linguistic features, most notably its complex Devanagari script, rich morphology, and numerous dialects, which pose unique challenges for natural language processing (NLP) evaluation. While the Nepali Language Understanding Evaluation (Nep-gLUE) benchmark provides a foundation for evaluating models, it remains limited in scope, covering only four tasks, which restricts its utility for comprehensive assessment of NLP models. To address this limitation, we introduce eight new datasets, creating a new benchmark, the Nepali Language Understanding Evaluation (NLUE) benchmark, which covers a total of 12 tasks for evaluating model performance across a diverse set of Natural Language Understanding (NLU) tasks. The added tasks include single-sentence classification, similarity and paraphrase tasks, and Natural Language Inference (NLI) tasks. When evaluated on the added tasks, existing models fall short in handling complex NLU tasks effectively. This expanded benchmark sets a new standard for evaluating, comparing, and advancing models, contributing significantly to the broader goal of advancing NLP research for low-resource languages.
Problem

Research questions and friction points this paper is trying to address.

Expanding limited Nepali NLU benchmarks with new datasets
Addressing complex linguistic challenges in Nepali language processing
Evaluating model performance across diverse NLU tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduced eight new datasets, expanding the benchmark from four to 12 tasks
Created the Nepali Language Understanding Evaluation (NLUE) benchmark
Expanded task coverage to single-sentence classification, similarity and paraphrase, and NLI tasks
Jinu Nyachhyon
Information and Language Processing Research Lab (ILPRL), Kathmandu University
Mridul Sharma
Information and Language Processing Research Lab (ILPRL), Kathmandu University
Prajwal Thapa
Information and Language Processing Research Lab (ILPRL), Kathmandu University
Bal Krishna Bal
Professor of Computer Engineering, Kathmandu University
Natural Language Processing · Sentiment Analysis · Software Localization