M-Prometheus: A Suite of Open Multilingual LLM Judges

📅 2025-04-07
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Current LLM-as-a-judge methods exhibit a strong English bias and weak multilingual evaluation capabilities, hindering the development of multilingual LLMs. To address this, the authors present the first systematic recipe for training multilingual LLM judges and release an open-weight judge suite (3B–14B parameters) that supports both direct scoring and pairwise comparison across more than 20 languages. They show that natively collected multilingual human feedback data significantly outperforms machine-translated alternatives, and fine-tune open-source backbone models at multiple parameter scales. The full stack is open-sourced: models, training data, and code. Extensive evaluation shows the judges surpass all existing open-source alternatives on multilingual reward-modeling benchmarks spanning more than 20 languages and on literary machine translation evaluation across four language pairs. Furthermore, using the judges at decoding time improves generation quality in all three tested languages.

📝 Abstract
The use of language models for automatically evaluating long-form text (LLM-as-a-judge) is becoming increasingly common, yet most LLM judges are optimized exclusively for English, with strategies for enhancing their multilingual evaluation capabilities remaining largely unexplored in the current literature. This has created a disparity in the quality of automatic evaluation methods for non-English languages, ultimately hindering the development of models with better multilingual capabilities. To bridge this gap, we introduce M-Prometheus, a suite of open-weight LLM judges ranging from 3B to 14B parameters that can provide both direct assessment and pairwise comparison feedback on multilingual outputs. M-Prometheus models outperform state-of-the-art open LLM judges on multilingual reward benchmarks spanning more than 20 languages, as well as on literary machine translation (MT) evaluation covering 4 language pairs. Furthermore, M-Prometheus models can be leveraged at decoding time to significantly improve generated outputs across all 3 tested languages, showcasing their utility for the development of better multilingual models. Lastly, through extensive ablations, we identify the key factors for obtaining an effective multilingual judge, including backbone model selection and training on natively multilingual feedback data instead of translated data. We release our models, training dataset, and code.
Problem

Research questions and friction points this paper is trying to address.

Enhancing multilingual evaluation capabilities of LLM judges
Bridging quality disparity in non-English automatic evaluation methods
Developing better multilingual models through effective evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Open-weight LLM judges for multilingual evaluation
Training on natively multilingual feedback data
Decoding-time leverage for output improvement
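The decoding-time leverage bullet typically corresponds to best-of-N reranking: sample several candidate outputs and keep the one the judge scores highest. A minimal sketch under that assumption, with a stubbed scoring function standing in for a real M-Prometheus call (all names are illustrative):

```python
def judge_score(prompt: str, candidate: str) -> float:
    """Stub for a judge call; a real system would query an M-Prometheus model here."""
    # Toy heuristic purely for demonstration: prefer longer candidates.
    return float(len(candidate))

def best_of_n(prompt: str, candidates: list[str]) -> str:
    """Return the candidate the judge rates highest (best-of-N reranking)."""
    return max(candidates, key=lambda c: judge_score(prompt, c))

candidates = ["Oui.", "Oui, bien sûr.", "Oui, bien sûr, avec plaisir."]
print(best_of_n("Réponds poliment en français.", candidates))
```

The same loop works for any judge that maps (prompt, candidate) to a scalar score; swapping the stub for a real judge call is the only change needed.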