🤖 AI Summary
Existing reward models (RMs) rely heavily on English data, and their multilingual capabilities remain systematically unassessed. To address this gap, we introduce M-RewardBench, the first benchmark for evaluating RMs in multilingual settings, comprising 2,870 high-quality preference samples across 23 languages and assessing cross-lingual generalization along four dimensions: chat, safety, reasoning, and translation. Through large-scale empirical analysis, we reveal, for the first time, substantial performance degradation of RMs on non-English languages as well as cross-lingual preference shifts, identifying translation quality and language-resource availability as key determinants. Our study confirms a pronounced English-centric bias in mainstream RMs. M-RewardBench is publicly released to support research on multilingual alignment, fairness, and robustness.
📝 Abstract
Reward models (RMs) have driven the state-of-the-art performance of LLMs today by enabling the integration of human feedback into the language modeling process. However, RMs are primarily trained and evaluated in English, and their capabilities in multilingual settings remain largely understudied. In this work, we conduct a systematic evaluation of several reward models in multilingual settings. We first construct the first-of-its-kind multilingual RM evaluation benchmark, M-RewardBench, consisting of 2.87k preference instances across 23 typologically diverse languages, which tests the chat, safety, reasoning, and translation capabilities of RMs. We then rigorously evaluate a wide range of reward models on M-RewardBench, offering fresh insights into their performance across diverse languages. We identify a significant gap in RM performance between English and non-English languages and show that RM preferences can change substantially from one language to another. We also present several findings on how different multilingual aspects impact RM performance. Specifically, we show that RM performance improves with better translation quality. Similarly, we demonstrate that models perform better on high-resource languages. We release the M-RewardBench dataset and the codebase from this study to facilitate a better understanding of RM evaluation in multilingual settings.
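To make the evaluation protocol concrete, below is a minimal sketch of the pairwise preference-accuracy metric that benchmarks of this kind report: an RM is credited when it assigns a higher scalar reward to the chosen response than to the rejected one, with results aggregated per language. The model ID is just an example of a public reward model, and the record fields (`prompt`, `chosen`, `rejected`, `language`) are illustrative assumptions, not the paper's actual schema.

```python
# Sketch: per-language pairwise preference accuracy for a reward model.
# Assumes records shaped like {"prompt", "chosen", "rejected", "language"}.
from collections import defaultdict

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_ID = "OpenAssistant/reward-model-deberta-v3-large-v2"  # example RM
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
model.eval()


def reward(prompt: str, response: str) -> float:
    """Scalar reward the RM assigns to a (prompt, response) pair."""
    inputs = tokenizer(prompt, response, return_tensors="pt", truncation=True)
    with torch.no_grad():
        return model(**inputs).logits[0].item()


def preference_accuracy(records) -> dict[str, float]:
    """Fraction of pairs where the RM prefers the chosen response,
    broken down by language."""
    correct, total = defaultdict(int), defaultdict(int)
    for r in records:
        lang = r["language"]
        total[lang] += 1
        if reward(r["prompt"], r["chosen"]) > reward(r["prompt"], r["rejected"]):
            correct[lang] += 1
    return {lang: correct[lang] / total[lang] for lang in total}
```

Comparing the resulting per-language accuracies against the English score is one simple way to quantify the English-to-non-English gap the paper reports.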