BanglaMATH : A Bangla benchmark dataset for testing LLM mathematical reasoning at grades 6, 7, and 8

๐Ÿ“… 2025-10-13
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
To address the lack of mathematical reasoning benchmarks for low-resource languages, this paper introduces BanglaMATH, the first grade-aligned (Grades 6–8) mathematical reasoning benchmark for Bangla, comprising 1.7k word problems spanning arithmetic, algebra, geometry, and logical reasoning. Methodologically, it incorporates distractor-augmented problem variants and cross-lingual translation robustness testing, while ensuring data quality through human annotation, multi-step reasoning labels, and back-translation validation. Experiments show that only Gemini 2.5 Flash and DeepSeek V3 achieve over 80% accuracy across all three grade levels, and even their performance degrades substantially under translation and distractor perturbations. The work systematically exposes the semantic fragility and limited cross-lingual generalization of large language models in low-resource-language mathematical reasoning, establishing a foundational benchmark for multilingual mathematical AI.

๐Ÿ“ Abstract
Large Language Models (LLMs) have tremendous potential to play a key role in supporting mathematical reasoning, with growing use in education and AI research. However, most existing benchmarks are limited to English, creating a significant gap for low-resource languages. For example, Bangla is spoken by nearly 250 million people who would collectively benefit from LLMs capable of native fluency. To address this, we present BanglaMATH, a dataset of 1.7k Bangla math word problems across topics such as Arithmetic, Algebra, Geometry, and Logical Reasoning, sourced from Bangla elementary school workbooks and annotated with details like grade level and number of reasoning steps. We have designed BanglaMATH to evaluate the mathematical capabilities of both commercial and open-source LLMs in Bangla, and we find that Gemini 2.5 Flash and DeepSeek V3 are the only models to achieve strong performance, with ≥ 80% accuracy across three elementary school grades. Furthermore, we assess the robustness and language bias of these top-performing LLMs by augmenting the original problems with distracting information and by translating the problems into English. We show that both LLMs fail to maintain robustness and exhibit significant performance bias in Bangla. Our study underlines current limitations of LLMs in handling arithmetic and mathematical reasoning in low-resource languages, and highlights the need for further research on multilingual and equitable mathematical understanding. Dataset link: https://github.com/TabiaTanzin/BanglaMATH-A-Bangla-benchmark-dataset-for-testing-LLM-mathematical-reasoning-at-grades-6-7-and-8.git
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLM mathematical reasoning in Bangla for elementary grades
Addressing performance bias in low-resource language mathematical benchmarks
Assessing robustness of LLMs with distracting information in Bangla
Innovation

Methods, ideas, or system contributions that make the work stand out.

Created BanglaMATH dataset with 1.7k math problems
Evaluated LLM performance on Bangla mathematical reasoning
Tested robustness via distracting information and translations
๐Ÿ”Ž Similar Papers
No similar papers found.
Tabia Tanzin Prama
PhD Student in Computer Science
Data Mining · NLP · Health Informatics · AI Ethics
Christopher M. Danforth
Computational Story Lab, Vermont Complex Systems Institute, Vermont Advanced Computing Center, Department of Mathematics and Statistics, University of Vermont, Burlington, VT 05405, USA
Peter Sheridan Dodds
Professor/Director, Computational Story Lab, Vermont Complex Systems Institute, UVM
Language · Meaning · Stories · Sociotechnical Phenomena · Complex Systems