UrduBench: An Urdu Reasoning Benchmark using Contextually Ensembled Translations with Human-in-the-Loop

📅 2026-01-28
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the absence of standardized reasoning benchmarks for low-resource languages like Urdu and the inability of existing machine translation approaches to preserve contextual and structural integrity in reasoning tasks. The authors propose a context-integrated translation framework that combines outputs from multiple translation systems with human validation to construct UrduBench, the first high-quality Urdu reasoning benchmark spanning multiple difficulty levels and task types (MGSM, MATH-500, CommonSenseQA, and OpenBookQA). Using this benchmark, they systematically evaluate various large language models under diverse prompting strategies, revealing significant performance degradation in multi-step and symbolic reasoning. Their findings underscore the critical role of linguistic consistency in enabling robust cross-lingual reasoning and establish a scalable paradigm for evaluating reasoning capabilities in low-resource languages.

๐Ÿ“ Abstract
Recent advances in large language models (LLMs) have led to strong reasoning capabilities; however, evaluating such models in low-resource languages remains challenging due to the lack of standardized benchmarks. In particular, Urdu reasoning evaluation has been limited by the sensitivity of machine translation and an emphasis on general language tasks rather than reasoning benchmarks. In this paper, we propose a contextually ensembled translation framework with human-in-the-loop validation that leverages multiple translation systems to develop Urdu reasoning benchmarks while preserving contextual and structural integrity. Using this framework, we translate widely adopted reasoning and question-answering benchmarks, including MGSM, MATH-500, CommonSenseQA, and OpenBookQA, into Urdu, collectively referred to as UrduBench, and conduct a comprehensive evaluation of both reasoning-oriented and instruction-tuned LLMs across multiple prompting strategies. Our analysis reveals performance differences across (1) four datasets, (2) five task difficulty levels, (3) diverse model architectures, (4) multiple model scaling settings, and (5) language consistency tests. We find that multi-step and symbolic reasoning tasks pose significant challenges in Urdu, and that stable language alignment is a critical prerequisite for robust reasoning. Overall, our work establishes a scalable methodology for standardized reasoning evaluation in Urdu and provides empirical insights into multilingual reasoning failures. This experimental setup is also broadly applicable to other low-resource languages. The code and datasets will be publicly released.
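The abstract's core mechanism, collecting candidate translations from several systems and routing low-consensus items to human validators, could be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the translator stubs, the string-overlap agreement metric, and the `threshold` parameter are all assumptions standing in for whatever consensus measure and review criteria the paper actually uses.

```python
from difflib import SequenceMatcher


def agreement(a: str, b: str) -> float:
    """Crude pairwise similarity between two candidate translations
    (a stand-in for a real translation-quality or consensus metric)."""
    return SequenceMatcher(None, a, b).ratio()


def ensemble_translate(source: str, translators, threshold: float = 0.8):
    """Collect one candidate per translation system, keep the candidate
    with the highest mean agreement with the others, and flag the item
    for human review when consensus falls below `threshold`.

    Returns (best_candidate, needs_human_review).
    """
    candidates = [translate(source) for translate in translators]

    def mean_agreement(i: int) -> float:
        # Average similarity of candidate i against all other candidates.
        others = [candidates[j] for j in range(len(candidates)) if j != i]
        return sum(agreement(candidates[i], o) for o in others) / len(others)

    scored = [(mean_agreement(i), candidates[i]) for i in range(len(candidates))]
    best_score, best = max(scored, key=lambda pair: pair[0])
    return best, best_score < threshold
```

In this sketch, high inter-system agreement lets a translation pass automatically, while divergent outputs (e.g., on multi-step math problems where structure is easily lost) are escalated to a human validator, which matches the human-in-the-loop role described in the abstract.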
Problem

Research questions and friction points this paper is trying to address.

Urdu reasoning
low-resource languages
reasoning benchmark
machine translation sensitivity
standardized evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

contextually ensembled translation
human-in-the-loop validation
Urdu reasoning benchmark
low-resource languages
multilingual reasoning evaluation