Gained in Translation: Privileged Pairwise Judges Enhance Multilingual Reasoning

📅 2026-01-26
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the sharp degradation of reasoning performance that large language models exhibit in low-resource languages, a gap that undermines cross-lingual consistency of reasoning capabilities. To tackle this challenge, the authors propose the SP3F framework: it first applies supervised fine-tuning on translated versions of English question-answer pairs, then runs self-play reinforcement learning guided by a pairwise discriminator that receives the English reference answer as privileged information, a design that provides effective training signal without any target-language data. Evaluated on both mathematical and non-mathematical tasks, SP3F achieves substantial improvements over fully post-trained models while using only minimal training data, with strong gains in monolingual, multilingual, and unseen-language settings. These results highlight the critical role of privileged information and self-play mechanisms in enabling effective cross-lingual transfer of reasoning abilities.

๐Ÿ“ Abstract
When asked a question in a language less seen in its training data, current reasoning large language models (RLMs) often exhibit dramatically lower performance than when asked the same question in English. In response, we introduce \texttt{SP3F} (Self-Play with Privileged Pairwise Feedback), a two-stage framework for enhancing multilingual reasoning without \textit{any} data in the target language(s). First, we supervise fine-tune (SFT) on translated versions of English question-answer pairs to raise base model correctness. Second, we perform RL with feedback from a pairwise judge in a self-play fashion, with the judge receiving the English reference response as \textit{privileged information}. Thus, even when none of the model's responses are completely correct, the privileged pairwise judge can still tell which response is better. End-to-end, \texttt{SP3F} greatly improves base model performance, even outperforming fully post-trained models on multiple math and non-math tasks with less than of the training data across the single-language, multilingual, and generalization to unseen language settings.
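The key idea in the abstract's second stage can be sketched in code: a pairwise judge compares two sampled target-language responses while seeing the English reference answer as privileged information, so it can rank responses even when neither is fully correct. This is a minimal, hypothetical illustration; the paper's judge would be an LLM, and the token-overlap score below is only a toy stand-in, not the authors' actual scoring method.

```python
def overlap_score(response: str, reference: str) -> float:
    """Toy proxy for judge quality: fraction of reference tokens
    that appear in the response. A real judge would be an LLM."""
    ref_tokens = set(reference.lower().split())
    resp_tokens = set(response.lower().split())
    if not ref_tokens:
        return 0.0
    return len(ref_tokens & resp_tokens) / len(ref_tokens)


def privileged_pairwise_judge(resp_a: str, resp_b: str,
                              english_reference: str) -> int:
    """Return 0 if resp_a is preferred, 1 if resp_b is preferred.

    Because the judge sees the English reference (privileged
    information), it can pick the relatively better response even
    when both responses are wrong.
    """
    score_a = overlap_score(resp_a, english_reference)
    score_b = overlap_score(resp_b, english_reference)
    return 0 if score_a >= score_b else 1


# Self-play style comparison: two sampled responses, neither exactly
# matching the reference, yet one is clearly closer.
reference = "the answer is 42 because 6 times 7 equals 42"
resp_a = "the answer is 41"                          # wrong, low overlap
resp_b = "6 times 7 equals 42 so the answer is 42"   # closer to reference

preferred = privileged_pairwise_judge(resp_a, resp_b, reference)
print(preferred)  # 1: resp_b is judged better
```

The preference signal from such a judge would then feed a standard pairwise RL objective (e.g., rewarding the preferred response) in the self-play loop.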
Problem

Research questions and friction points this paper is trying to address.

multilingual reasoning
large language models
low-resource languages
reasoning performance
language disparity
Innovation

Methods, ideas, or system contributions that make the work stand out.

multilingual reasoning
privileged pairwise feedback
self-play reinforcement learning
zero-resource language adaptation
large language models