Learn Globally, Speak Locally: Bridging the Gaps in Multilingual Reasoning

📅 2025-07-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit pervasive reasoning-language misalignment in low-resource languages (e.g., Swahili, Thai), frequently misinterpreting prompts or defaulting to English reasoning, which undermines factual accuracy and interpretability. Existing benchmarks evaluate only final answers, neglecting linguistic consistency throughout the reasoning process. To address this, the paper proposes: (1) GeoFact-X, a multilingual geographic fact-reasoning benchmark featuring human-annotated reasoning paths across five languages; (2) BRIDGE, a training framework integrating supervised fine-tuning with test-time reinforcement learning, incorporating a language-consistency reward to align reasoning language with input language; and (3) an LLM-as-a-judge automated evaluation protocol for reasoning fidelity and linguistic alignment. Experiments demonstrate significant improvements in reasoning faithfulness and language consistency for low-resource languages. The results empirically validate that reasoning-aware training is critical for cross-lingual generalization, highlighting the necessity of explicitly modeling reasoning-language coherence in multilingual LLMs.

📝 Abstract
Large Language Models (LLMs) have achieved strong performance in domains like mathematics, factual QA, and code generation, yet their multilingual reasoning capabilities in these tasks remain underdeveloped. For low-resource languages such as Swahili or Thai in particular, LLMs often misinterpret prompts or default to reasoning in English. This implicit bias toward high-resource languages undermines factual accuracy, interpretability, and trust. Current multilingual benchmarks focus only on final answers, overlooking whether models actually reason in the target language. To address this gap, we introduce GeoFact-X, a geography-based multilingual factual reasoning benchmark with annotated reasoning traces in five languages: English, Hindi, Japanese, Swahili, and Thai. We further propose BRIDGE, a novel training method that guides supervised fine-tuning and test-time reinforcement learning with a language-consistency reward to align reasoning with the input language. Finally, we develop an automatic evaluation protocol using LLM-as-a-judge to assess answer correctness and the quality and language consistency of reasoning traces, enabling nuanced and scalable analysis beyond surface-level metrics. Our results show that BRIDGE significantly enhances multilingual reasoning fidelity, demonstrating that reasoning-aware multilingual reinforcement learning is crucial for robust cross-lingual generalization. https://jd730.github.io/projects/GeoFact-X_BRIDGE
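The LLM-as-a-judge protocol described in the abstract could be sketched roughly as below. The rubric fields, template wording, and the verdict-parsing step are illustrative assumptions, not the paper's actual implementation; the call to the judge model itself is omitted.

```python
import json

# Hypothetical judge rubric covering the three dimensions the abstract
# mentions: answer correctness, reasoning quality, and language consistency.
JUDGE_TEMPLATE = """You are an impartial judge. Evaluate the model response below.
Question ({language}): {question}
Reference answer: {reference}
Model reasoning and answer: {response}

Return a JSON object with fields:
  "answer_correct": true or false,
  "reasoning_quality": integer 1-5,
  "reasoning_in_input_language": true or false
"""

def build_judge_prompt(question: str, reference: str,
                       response: str, language: str) -> str:
    # Fill the rubric template; sending it to a judge LLM is omitted here.
    return JUDGE_TEMPLATE.format(question=question, reference=reference,
                                 response=response, language=language)

def parse_verdict(judge_output: str) -> dict:
    # Extract the first-to-last-brace JSON object from the judge's raw text.
    start, end = judge_output.find("{"), judge_output.rfind("}") + 1
    return json.loads(judge_output[start:end])
```

A scoring pipeline would then aggregate `parse_verdict` results per language to report correctness and reasoning-language consistency separately.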
Problem

Research questions and friction points this paper is trying to address.

LLMs struggle with multilingual reasoning in low-resource languages
Current benchmarks ignore reasoning language, focusing only on answers
Need for methods to align reasoning with input language
Innovation

Methods, ideas, or system contributions that make the work stand out.

GeoFact-X benchmark with annotated multilingual reasoning traces
BRIDGE training method with language-consistency reward
Automatic evaluation using LLM-as-a-judge protocol
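A language-consistency reward of the kind BRIDGE uses could be approximated with a coarse script-matching heuristic, as in the sketch below. This is a simplified stand-in (Unicode-script matching rather than a true language identifier), and the function names and the 0/1 reward scheme are assumptions for illustration, not the paper's method.

```python
import unicodedata

def dominant_script(text: str) -> str:
    # Classify alphabetic characters by a coarse Unicode script family
    # and return the most frequent one.
    counts: dict[str, int] = {}
    for ch in text:
        if not ch.isalpha():
            continue
        name = unicodedata.name(ch, "")
        if name.startswith("THAI"):
            script = "thai"
        elif name.startswith("DEVANAGARI"):
            script = "devanagari"
        elif name.startswith(("HIRAGANA", "KATAKANA", "CJK")):
            script = "japanese"
        else:
            script = "latin"
        counts[script] = counts.get(script, 0) + 1
    return max(counts, key=counts.get) if counts else "latin"

def language_consistency_reward(prompt: str, reasoning: str) -> float:
    # Reward 1.0 when the reasoning's dominant script matches the prompt's,
    # else 0.0. In BRIDGE-style training this signal would be combined
    # with a correctness reward during test-time reinforcement learning.
    return 1.0 if dominant_script(reasoning) == dominant_script(prompt) else 0.0
```

For example, a Thai prompt answered with English chain-of-thought would receive reward 0.0, while a Thai-language reasoning trace would receive 1.0.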