When Models Reason in Your Language: Controlling Thinking Trace Language Comes at the Cost of Accuracy

📅 2025-05-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study identifies a critical deficiency in large reasoning models (LRMs) under multilingual settings: their chain-of-thought (CoT) reasoning frequently reverts to English or produces fragmented, unreadable non-English output, severely impairing native-language users' ability to supervise and verify the reasoning. To address this, we introduce XReasoning, a multilingual benchmark that, for the first time, systematically quantifies the trade-off between language controllability and answer accuracy. We propose a prompt-based intervention that improves CoT readability in target languages but incurs an average 12% accuracy drop. We further design a lightweight post-training approach requiring only 100 samples, which largely restores accuracy while preserving target-language consistency. Our core contributions are: (1) formalizing the accuracy–readability trade-off as a fundamental consideration for multilingual CoT; and (2) providing the first efficient, low-resource pathway to improving language controllability in LRMs.

📝 Abstract
Recent Large Reasoning Models (LRMs) with thinking traces have shown strong performance on English reasoning tasks. However, their ability to think in other languages is less studied. This capability is as important as answer accuracy for real-world applications because users may find the reasoning trace useful for oversight only when it is expressed in their own language. We comprehensively evaluate two leading families of LRMs on our XReasoning benchmark and find that even the most advanced models often revert to English or produce fragmented reasoning in other languages, revealing a substantial gap in multilingual reasoning. Prompt-based interventions that force models to reason in the user's language improve readability and oversight but reduce answer accuracy, exposing an important trade-off. We further show that targeted post-training on just 100 examples mitigates this mismatch, though some accuracy loss remains. Our results highlight the limited multilingual reasoning capabilities of current LRMs and outline directions for future work. Code and data are available at https://github.com/Betswish/mCoT-XReasoning.
Problem

Research questions and friction points this paper is trying to address.

Evaluating multilingual reasoning gaps in Large Reasoning Models
Trade-off between reasoning language control and answer accuracy
Improving multilingual reasoning with targeted post-training interventions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Prompt-based interventions enforce multilingual reasoning
Targeted post-training mitigates accuracy loss
Comprehensive evaluation reveals multilingual reasoning gaps
Jirui Qi
University of Groningen
Natural Language Processing
Shan Chen
Harvard University, Mass General Brigham, Boston Children’s Hospital
Zidi Xiong
Harvard University
Trustworthy machine learning
R. Fernández
University of Amsterdam
D. Bitterman
Harvard University, Mass General Brigham, Boston Children’s Hospital
Arianna Bisazza
Associate Professor, University of Groningen
Natural Language Processing · Multilingual NLP · Interpretability · Language Learning in Humans vs Machines