🤖 AI Summary
This study reveals a severe reasoning–conclusion misalignment problem in multilingual large language models (LLMs) for non-Latin-script languages: misalignment rates are at least twice those of Latin-script languages, indicating that conventional evaluation metrics substantially overestimate true multilingual reasoning capability. To address this, the authors propose a human-validated, cross-lingual framework for evaluating reasoning–conclusion alignment. It comprises: (i) 65K manually verified reasoning traces drawn from GlobalMMLU questions; (ii) cross-lingual consistency scoring; (iii) fine-grained error annotation; and (iv) a human-annotated taxonomy of reasoning errors, dominated by evidential errors (unsupported claims, ambiguous facts) followed by illogical reasoning steps. Evaluation across six languages and six state-of-the-art LLMs shows a significant decoupling between task accuracy and reasoning correctness: reasoning–conclusion alignment rates for non-Latin-script languages range from only 31% to 47%, markedly below the 68–79% observed for Latin-script languages.
📝 Abstract
Large language models demonstrate strong reasoning capabilities through chain-of-thought prompting, but whether this reasoning quality transfers across languages remains underexplored. We introduce a human-validated framework to evaluate whether model-generated reasoning traces logically support their conclusions across languages. Analyzing 65K reasoning traces from GlobalMMLU questions across 6 languages and 6 frontier models, we uncover a critical blind spot: while models achieve high task accuracy, their reasoning can fail to support their conclusions. Reasoning traces in non-Latin scripts show at least twice as much misalignment between reasoning and conclusion as those in Latin scripts. We develop an error taxonomy through human annotation to characterize these failures, finding they stem primarily from evidential errors (unsupported claims, ambiguous facts) followed by illogical reasoning steps. Our findings demonstrate that current multilingual evaluation practices provide an incomplete picture of model reasoning capabilities and highlight the need for reasoning-aware evaluation frameworks.
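The per-script alignment rates reported above amount to a simple aggregation over per-trace human judgments. A minimal sketch of that aggregation follows; the data layout, field names, and labels here are illustrative assumptions, not the paper's actual scoring pipeline.

```python
# Hypothetical sketch: aggregating per-trace alignment judgments into the kind
# of per-script alignment rates the study reports (e.g. 31-47% non-Latin vs
# 68-79% Latin). Field names ("script", "aligned") are illustrative only.
from collections import defaultdict

def alignment_rates(traces):
    """traces: iterable of dicts with 'script' (e.g. 'latin'/'non_latin') and
    'aligned' (bool: does the reasoning trace support the conclusion?).
    Returns the fraction of aligned traces per script group."""
    counts = defaultdict(lambda: [0, 0])  # script -> [aligned, total]
    for t in traces:
        counts[t["script"]][1] += 1
        if t["aligned"]:
            counts[t["script"]][0] += 1
    return {s: aligned / total for s, (aligned, total) in counts.items()}

# Toy example with made-up labels:
sample = [
    {"script": "latin", "aligned": True},
    {"script": "latin", "aligned": True},
    {"script": "latin", "aligned": False},
    {"script": "non_latin", "aligned": True},
    {"script": "non_latin", "aligned": False},
]
rates = alignment_rates(sample)  # {'latin': 0.666..., 'non_latin': 0.5}
```

The key point the metric captures is that alignment is judged independently of whether the final answer is correct, which is what exposes the accuracy/reasoning decoupling.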