A Comprehensive Evaluation of Multilingual Chain-of-Thought Reasoning: Performance, Consistency, and Faithfulness Across Languages

📅 2025-10-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work systematically investigates performance, consistency, and faithfulness disparities in multilingual chain-of-thought (CoT) reasoning across large language models (LLMs). Addressing the lack of cross-lingual CoT quality evaluation in prior work, we propose the first comprehensive multilingual CoT benchmarking framework: (1) a cross-lingual reasoning trace consistency verification method; (2) a faithfulness analysis paradigm based on controlled perturbations—including thought substitution, truncation, and erroneous token injection; and (3) explicit instruction following and prompt attack tests to assess linguistic compliance. Experiments uncover pronounced language bias: LLMs exhibit systematic variation across languages in CoT generation quality, frequency of CoT usage, and downstream reasoning efficacy. All code and datasets are publicly released, establishing a foundational benchmark and toolkit for trustworthy multilingual reasoning research.
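The cross-lingual trace-interchange check described in the summary can be sketched as follows: the same question is answered twice, once with a thinking trace generated under one language's prompt and once with the other language's trace swapped in, and the two final answers are compared. The `generate` callable is a hypothetical stub standing in for an actual LLM call, and the `<think>` delimiters are an illustrative assumption, not the paper's exact format.

```python
import re

def interchange_consistency(question, trace_lang_a, trace_lang_b, generate):
    """Return True if swapping in the other language's thinking trace
    leaves the model's final answer unchanged."""
    answer_a = generate(question + "\n<think>" + trace_lang_a + "</think>")
    answer_b = generate(question + "\n<think>" + trace_lang_b + "</think>")
    return answer_a == answer_b

# Toy stand-in for a model: extract the last number mentioned in the prompt.
def toy_generate(prompt):
    nums = re.findall(r"\d+", prompt)
    return nums[-1] if nums else ""

print(interchange_consistency(
    "What is 2+3?", "2 plus 3 is 5", "2 más 3 son 5", toy_generate))
```

With the toy model, both the English and Spanish traces end in the same number, so the check reports consistency; a real evaluation would compare answers produced by the LRM under study.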

📝 Abstract
Large reasoning models (LRMs) increasingly rely on step-by-step Chain-of-Thought (CoT) reasoning to improve task performance, particularly in high-resource languages such as English. While recent work has examined final-answer accuracy in multilingual settings, the thinking traces themselves, i.e., the intermediate steps that lead to the final answer, remain underexplored. In this paper, we present the first comprehensive study of multilingual CoT reasoning, evaluating three key dimensions: performance, consistency, and faithfulness. We begin by measuring language compliance, answer accuracy, and answer consistency when LRMs are explicitly instructed or prompt-hacked to think in a target language, revealing strong language preferences and divergent performance across languages. Next, we assess cross-lingual consistency of thinking traces by interchanging them between languages. We find that the quality and effectiveness of thinking traces vary substantially depending on the prompt language. Finally, we adapt perturbation-based techniques (truncation and error injection) to probe the faithfulness of thinking traces across languages, showing that models rely on traces to varying degrees. We release our code and data to support future research.
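The two perturbation probes named in the abstract, truncation and error injection, can be sketched at the whitespace-token level as below. The function names, the token granularity, and the `"ERROR"` placeholder token are illustrative assumptions, not the paper's implementation.

```python
import random

def truncate_trace(trace: str, keep_fraction: float) -> str:
    """Keep only the first keep_fraction of the thinking trace's tokens."""
    tokens = trace.split()
    cut = max(1, int(len(tokens) * keep_fraction))
    return " ".join(tokens[:cut])

def inject_errors(trace: str, n_errors: int, seed: int = 0) -> str:
    """Replace n_errors randomly chosen tokens with an erroneous token."""
    rng = random.Random(seed)
    tokens = trace.split()
    positions = rng.sample(range(len(tokens)), k=min(n_errors, len(tokens)))
    for pos in positions:
        tokens[pos] = "ERROR"  # placeholder erroneous token
    return " ".join(tokens)

trace = "Step 1: add 2 and 3 to get 5 . Step 2: multiply 5 by 4 to get 20 ."
print(truncate_trace(trace, 0.5))
print(inject_errors(trace, 2))
```

A faithfulness probe then feeds the perturbed trace back to the model: if the final answer is unaffected by heavy truncation or injected errors, the model is not genuinely relying on the trace.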
Problem

Research questions and friction points this paper is trying to address.

Evaluating multilingual Chain-of-Thought reasoning performance across languages
Assessing cross-lingual consistency of thinking traces in reasoning models
Probing faithfulness of reasoning traces through perturbation techniques
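The first friction point above hinges on checking whether a model actually thinks in the requested language. A crude way to operationalize that check is a Unicode-script heuristic over the trace's letters, sketched below; this heuristic is an illustrative assumption (a real evaluation would use a proper language-identification model).

```python
import unicodedata

def dominant_script(text: str) -> str:
    """Return the most frequent Unicode script prefix among the letters in text,
    e.g. LATIN, CYRILLIC, CJK, or NONE if there are no letters."""
    counts = {}
    for ch in text:
        if ch.isalpha():
            name = unicodedata.name(ch, "")
            script = name.split()[0] if name else "UNKNOWN"
            counts[script] = counts.get(script, 0) + 1
    return max(counts, key=counts.get) if counts else "NONE"

print(dominant_script("Donc la réponse est cinq."))  # LATIN
print(dominant_script("答案是五"))                    # CJK
```

Comparing the detected script against the script of the instructed target language gives a first-pass language-compliance signal for each thinking trace.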
Innovation

Methods, ideas, or system contributions that make the work stand out.

First comprehensive benchmark covering performance, consistency, and faithfulness of multilingual CoT reasoning
Cross-lingual trace-interchange method for verifying thinking-trace consistency between languages
Perturbation-based faithfulness probes (truncation and error injection) adapted to multilingual traces, plus instruction-following and prompt-attack tests for language compliance