🤖 AI Summary
Current LLM-as-a-judge methodologies exhibit a strong English bias and weak multilingual automatic evaluation capabilities, hindering the development of multilingual LLMs. To address this, we propose the first systematic paradigm for training multilingual LLM judges, releasing an open-source judge series (3B-14B parameters) supporting both direct scoring and pairwise comparison across 20+ languages. We demonstrate that natively collected multilingual human feedback data significantly outperforms machine-translated alternatives, and fine-tune open-source backbone models at multiple parameter scales. Our fully open-sourced stack includes models, data, and code. Extensive evaluation shows our judges surpass all existing open-source alternatives on multilingual reward modeling benchmarks (20+ languages) and four literary machine translation assessment suites. Furthermore, decoding-time intervention with our judges improves generation quality in all three tested languages.
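The two feedback protocols mentioned above (direct scoring and pairwise comparison) can be sketched as follows. Note that `call_judge` is a hypothetical stand-in, stubbed with a trivial heuristic so the example runs; it is not the actual M-Prometheus inference API.

```python
# Sketch of the two judge feedback protocols (a toy illustration,
# not the real M-Prometheus interface).

def call_judge(instruction: str, response: str) -> float:
    """Stub judge: return a quality score in [1, 5] for one response.
    A real judge model produces a rubric-based score; this toy stand-in
    simply rewards more detailed responses so the sketch is runnable."""
    return min(5.0, 1.0 + len(response.split()) / 5)

def direct_assessment(instruction: str, response: str) -> float:
    """Direct scoring: grade a single response on a 1-5 scale."""
    return call_judge(instruction, response)

def pairwise_comparison(instruction: str, a: str, b: str) -> str:
    """Pairwise comparison: pick the better of two candidate responses."""
    return "A" if call_judge(instruction, a) >= call_judge(instruction, b) else "B"

short = "Oui."
long = "Oui, la Tour Eiffel se trouve a Paris, en France."
print(pairwise_comparison("Ou est la Tour Eiffel ?", short, long))  # -> "B"
```

Both protocols share one underlying scoring call; the pairwise variant simply compares two scored responses, which is why a single trained judge can serve both evaluation modes.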
📄 Abstract
The use of language models for automatically evaluating long-form text (LLM-as-a-judge) is becoming increasingly common, yet most LLM judges are optimized exclusively for English, with strategies for enhancing their multilingual evaluation capabilities remaining largely unexplored in the current literature. This has created a disparity in the quality of automatic evaluation methods for non-English languages, ultimately hindering the development of models with better multilingual capabilities. To bridge this gap, we introduce M-Prometheus, a suite of open-weight LLM judges ranging from 3B to 14B parameters that can provide both direct assessment and pairwise comparison feedback on multilingual outputs. M-Prometheus models outperform state-of-the-art open LLM judges on multilingual reward benchmarks spanning more than 20 languages, as well as on literary machine translation (MT) evaluation covering 4 language pairs. Furthermore, M-Prometheus models can be leveraged at decoding time to significantly improve generated outputs across all 3 tested languages, showcasing their utility for the development of better multilingual models. Lastly, through extensive ablations, we identify the key factors for obtaining an effective multilingual judge, including backbone model selection and training on natively multilingual feedback data instead of translated data. We release our models, training dataset, and code.
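One common way to use a judge at decoding time, and a plausible reading of the abstract's claim about improving generated outputs, is best-of-N reranking: sample several candidates from the base model and keep the one the judge scores highest. The sketch below assumes a hypothetical `judge_score` function in place of a real model call, stubbed with a keyword heuristic so it runs standalone.

```python
# Best-of-N reranking at decoding time (a minimal sketch, assuming a
# hypothetical judge_score in place of a real M-Prometheus call).

def judge_score(instruction: str, candidate: str) -> float:
    """Stub judge: prefers candidates that mention the instruction's
    final keyword. A real judge would score fluency, accuracy, etc."""
    keyword = instruction.split()[-1].strip("?").lower()
    return 5.0 if keyword in candidate.lower() else 1.0

def best_of_n(instruction: str, candidates: list[str]) -> str:
    """Return the candidate the judge scores highest."""
    return max(candidates, key=lambda c: judge_score(instruction, c))

cands = ["No idea.", "Lisbon is the capital of Portugal.", "Maybe Porto?"]
print(best_of_n("What is the capital of Portugal?", cands))
# -> "Lisbon is the capital of Portugal."
```

In practice the candidates would come from sampling the base model several times with temperature, and the judge call would run the M-Prometheus model in direct-assessment mode on each candidate.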