🤖 AI Summary
Large language models (LLMs) exhibit pervasive reasoning-language misalignment in low-resource languages (e.g., Swahili, Thai), frequently misinterpreting prompts or defaulting to English reasoning, which undermines factual accuracy and interpretability. Existing benchmarks evaluate only final answers, neglecting linguistic consistency throughout the reasoning process. To address this, we propose: (1) GeoFact-X, the first multilingual geographic fact-reasoning benchmark featuring human-annotated reasoning paths across diverse languages; (2) BRIDGE, a training framework integrating supervised fine-tuning with test-time reinforcement learning, incorporating a language-consistency reward to align reasoning language with input language; and (3) an LLM-as-a-judge automated evaluation protocol for reasoning fidelity and linguistic alignment. Experiments demonstrate significant improvements in reasoning faithfulness and language consistency for low-resource languages. Our results empirically validate that reasoning-aware training is critical for cross-lingual generalization, highlighting the necessity of explicitly modeling reasoning-language coherence in multilingual LLMs.
📝 Abstract
Large Language Models (LLMs) have achieved strong performance in domains like mathematics, factual QA, and code generation, yet their multilingual reasoning capabilities in these tasks remain underdeveloped. Especially for low-resource languages such as Swahili or Thai, LLMs often misinterpret prompts or default to reasoning in English. This implicit bias toward high-resource languages undermines factual accuracy, interpretability, and trust. Current multilingual benchmarks focus only on final answers, overlooking whether models actually reason in the target language. To address this gap, we introduce GeoFact-X, a geography-based multilingual factual reasoning benchmark with annotated reasoning traces in five languages: English, Hindi, Japanese, Swahili, and Thai. We further propose BRIDGE, a novel training method that guides supervised fine-tuning and test-time reinforcement learning with a language-consistency reward to align reasoning with the input language. Finally, we develop an automatic evaluation protocol using LLM-as-a-judge to assess answer correctness and the quality and language consistency of reasoning traces, enabling nuanced and scalable analysis beyond surface-level metrics. Our results show that BRIDGE significantly enhances multilingual reasoning fidelity, demonstrating that reasoning-aware multilingual reinforcement learning is crucial for robust cross-lingual generalization. Project page: https://jd730.github.io/projects/GeoFact-X_BRIDGE