GRRM: Group Relative Reward Modeling for Machine Translation

📅 2026-02-15
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Traditional scalar quality metrics struggle to perform fine-grained relative ranking among groups of candidate translations in machine translation because they lack a contextual comparison mechanism. This work proposes a Group Quality Metric (GQM) paradigm that introduces, for the first time, an intra-group joint comparison mechanism to construct a Group Relative Reward Model (GRRM). By jointly modeling all candidates within a group, GRRM enables relative quality assessment at adaptive granularity and is integrated into a Group Relative Policy Optimization (GRPO) training framework. Experimental results demonstrate that GRRM significantly outperforms existing baselines in ranking accuracy. The proposed approach not only enhances translation quality but also exhibits relative judgment capabilities comparable to state-of-the-art reasoning models.

πŸ“ Abstract
While Group Relative Policy Optimization (GRPO) offers a powerful framework for LLM post-training, its effectiveness in open-ended domains like Machine Translation hinges on accurate intra-group ranking. We identify that standard Scalar Quality Metrics (SQM) fall short in this context: by evaluating candidates in isolation, they lack the comparative context necessary to distinguish fine-grained linguistic nuances. To address this, we introduce the Group Quality Metric (GQM) paradigm and instantiate it via the Group Relative Reward Model (GRRM). Unlike traditional independent scorers, GRRM processes the entire candidate group jointly, leveraging comparative analysis to rigorously resolve relative quality at adaptive granularity. Empirical evaluations confirm that GRRM achieves competitive ranking accuracy against all baselines. Building on this foundation, we integrate GRRM into the GRPO training loop to optimize the translation policy. Experimental results demonstrate that our framework not only improves general translation quality but also unlocks reasoning capabilities comparable to state-of-the-art reasoning models. We release code, datasets, and model checkpoints at https://github.com/NJUNLP/GRRM.
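The group-relative pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: `score_group` is a hypothetical stand-in for GRRM's joint scorer (here a dummy length-based signal), while the advantage normalization follows the standard GRPO formulation of centering and scaling each reward against its group.

```python
# Sketch: group-relative rewards feeding GRPO-style advantages.
# `score_group` is a placeholder for a joint scorer such as GRRM,
# which would rank all candidates of a group together.
from statistics import mean, pstdev


def score_group(source: str, candidates: list[str]) -> list[float]:
    """Hypothetical joint scorer. A real GRRM conditions on the whole
    group; this dummy uses candidate length as the quality signal."""
    return [float(len(c)) for c in candidates]


def grpo_advantages(rewards: list[float], eps: float = 1e-6) -> list[float]:
    """Standard GRPO advantage: normalize each candidate's reward by
    the group mean and standard deviation."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]


candidates = ["a rough draft", "a fluent, faithful translation", "ok"]
rewards = score_group("source sentence", candidates)
advs = grpo_advantages(rewards)
```

Because advantages are normalized within the group, they sum to (approximately) zero: candidates better than the group average receive positive advantage and push the policy toward themselves during GRPO updates.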
Problem

Research questions and friction points this paper is trying to address.

Machine Translation
Group Relative Policy Optimization
Scalar Quality Metrics
Intra-group Ranking
Comparative Evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Group Relative Reward Modeling
Machine Translation
Group Quality Metric
Relative Ranking
Policy Optimization