🤖 AI Summary
Existing multilingual mathematical benchmarks (e.g., MGSM) suffer from translation errors and inconsistent answer extraction, distorting cross-lingual performance evaluation of LLMs and artificially inflating the apparent disparities between high- and low-resource languages.
Method: We propose a scalable, automated quality assurance framework integrating back-translation validation and standardized answer parsing to systematically correct the MGSM dataset and unify evaluation protocols.
Contribution/Results: Experiments show that, after correction, the performance disparities across languages largely disappear for mainstream LLMs, challenging the prevailing "language capability gap" narrative. The work demonstrates how evaluation artifacts can contaminate multilingual research and releases a corrected, open-source, reproducible version of the MGSM benchmark. This provides a methodological basis for trustworthy cross-lingual capability analysis, enabling a more accurate and equitable assessment of LLMs' mathematical reasoning across languages.
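The back-translation validation step can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the `translate_to_en` callable (standing in for any MT system), the `difflib` similarity measure, and the 0.8 threshold are all assumptions for the sake of the example.

```python
from difflib import SequenceMatcher


def back_translation_check(original_en: str, translated: str,
                           translate_to_en, threshold: float = 0.8) -> bool:
    """Flag translated items whose back-translation drifts from the English
    source; items scoring below the threshold would go to manual review."""
    back_translated = translate_to_en(translated)
    similarity = SequenceMatcher(None, original_en.lower(),
                                 back_translated.lower()).ratio()
    return similarity >= threshold


# Toy check with an identity "translator" standing in for a real MT system:
assert back_translation_check("Tom has 3 apples.", "Tom has 3 apples.",
                              lambda s: s)
```

In a real pipeline the string-similarity ratio would likely be replaced by a semantic measure (e.g., embedding cosine similarity), since surface overlap penalizes legitimate paraphrases produced by the MT system.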
📝 Abstract
Most current large language models (LLMs) support a wide variety of languages in addition to English, including high-resource languages (e.g., German, Chinese, French) as well as low-resource ones (e.g., Swahili, Telugu). They have also shown impressive capabilities across domains such as coding, science, and math. In this short paper, taking math as an example domain, we study the performance of different LLMs across languages. Experimental results show that there exists a non-negligible and consistent gap in the performance of the models across languages. Interestingly, and somewhat against expectations, the gap exists for both high- and low-resource languages. We hope that these results influence further research into cross-lingual capability generalization for next-generation LLMs. If it weren't for the fact that they are false! By analyzing one of the standard multilingual math benchmarks (MGSM), we find that the data contains several translation errors. Furthermore, the lack of standardized answer extraction from LLM outputs further skews the final results. We propose a method for automatic quality assurance to address the first issue at scale, and give recommendations to address the second. Combining these two approaches, we show that the aforementioned language gap mostly disappears, leading to completely different conclusions. We additionally release the corrected dataset to the community.