🤖 AI Summary
This work investigates whether multilingual large language models (LLMs) differ in mathematical reasoning ability across languages, and whether reasoning paths generated in different languages complement one another. Moving beyond monolingual reward modeling, the authors train a cross-lingual reward model that jointly ranks generated reasoning paths across languages for a given question. Experiments on the GSM8K and MATH benchmarks show substantial accuracy gains over reward modeling within a single language, with benefits extending even to high-resource languages. Notably, under low sampling budgets, cross-lingual sampling particularly improves performance on English, suggesting positive transfer from other languages to a high-resource one. These results support multilingual collaborative optimization as an effective route to stronger LLM reasoning.
📝 Abstract
While the reasoning abilities of large language models (LLMs) continue to advance, it remains unclear how this ability varies across languages in multilingual LLMs and whether different languages produce reasoning paths that complement each other. To investigate these questions, we train a reward model to rank generated responses for a given question across languages. Our results show that our cross-lingual reward model substantially improves mathematical reasoning performance compared to using reward modeling within a single language, benefiting even high-resource languages. While English often exhibits the highest performance in multilingual models, we find that cross-lingual sampling particularly benefits English under low sampling budgets. Our findings reveal new opportunities to improve multilingual reasoning by leveraging the complementary strengths of diverse languages.
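The cross-lingual reranking idea can be sketched as: sample candidate reasoning paths in several languages, score each with a reward model, and select the highest-scoring answer from the pooled multilingual candidates. Below is a minimal illustrative sketch; the `Candidate`, `toy_reward`, and `rerank` names are assumptions, and the reward here is a toy exact-match function standing in for the learned reward model, not the paper's implementation.

```python
from dataclasses import dataclass


@dataclass
class Candidate:
    language: str   # language the reasoning path was generated in
    answer: str     # final answer extracted from the reasoning path
    reward: float   # score assigned by the (stand-in) reward model


def toy_reward(answer: str, reference: str) -> float:
    """Toy stand-in for a learned reward model: 1.0 on an exact match."""
    return 1.0 if answer.strip() == reference.strip() else 0.0


def rerank(candidates: list[Candidate]) -> Candidate:
    """Pick the highest-reward candidate from the pooled multilingual set."""
    return max(candidates, key=lambda c: c.reward)


# Simulated candidates for one GSM8K-style question (reference answer "42").
reference = "42"
pool = [
    Candidate("en", "41", toy_reward("41", reference)),
    Candidate("de", "42", toy_reward("42", reference)),
    Candidate("sw", "42", toy_reward("42", reference)),
]
best = rerank(pool)
```

Because ranking happens over the pooled set rather than per language, a correct path found in any language can win even when the English samples are wrong, which is the intuition behind the gains under low sampling budgets.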