Are Large Reasoning Models Good Translation Evaluators? Analysis and Performance Boost

📅 2025-10-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Can large reasoning models (LRMs) reliably assess machine translation (MT) quality? This work systematically investigates this question for the first time, identifying critical issues: overthinking, systematic score inflation, excessive computational overhead, and anthropomorphic judgment biases. To address these, we propose three innovations: (1) synthetic human-like reasoning trajectory training to calibrate inference paths; (2) dynamic score calibration to mitigate systematic overestimation; and (3) thought-budget optimization to constrain reasoning depth. Evaluated on the WMT24 Metrics benchmark, our approach reduces inference cost of 7B–32B LRMs by ~35× while improving R1-Distill-Qwen-7B’s correlation by +8.7 percentage points. It significantly outperforms existing MT evaluation paradigms, establishing a new, trustworthy framework and scalable technical pathway for deploying LRMs in MT assessment.
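The thought-budget idea above can be illustrated with a minimal sketch: cap the length of the model's reasoning trace and force a transition to the final answer once the budget is spent. The function name, the token-list representation, and the `</think>` closing marker are assumptions for illustration, not the paper's implementation.

```python
def apply_thought_budget(thinking_tokens, budget, closer="</think>"):
    """Truncate a reasoning trace to a fixed token budget.

    If the trace exceeds the budget, cut it off and append a closing
    marker so the model must proceed to its final answer.
    """
    if len(thinking_tokens) <= budget:
        return thinking_tokens
    return thinking_tokens[:budget] + [closer]

# A 6-token trace under a 4-token budget keeps the first 4 tokens
# and appends the forced closing marker.
trace = ["the", "translation", "drops", "a", "negation", "here"]
capped = apply_thought_budget(trace, budget=4)
print(capped)  # ['the', 'translation', 'drops', 'a', '</think>']
```

In practice such a cap would be enforced at decoding time (e.g., via a maximum-token limit on the thinking segment); the sketch only shows the constraint itself.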

📝 Abstract
Recent advancements in large reasoning models (LRMs) have introduced an intermediate "thinking" process prior to generating final answers, improving their reasoning capabilities on complex downstream tasks. However, the potential of LRMs as evaluators of machine translation (MT) quality remains underexplored. We provide the first systematic analysis of LRM-as-a-judge in MT evaluation. We identify key challenges, revealing that LRMs require tailored evaluation materials, tend to "overthink" simpler instances, and have issues with scoring mechanisms that lead to overestimation. To address these, we propose to calibrate LRM thinking by training them on synthetic, human-like thinking trajectories. Our experiments on the WMT24 Metrics benchmark demonstrate that this approach reduces thinking budgets by ~35x while concurrently improving evaluation performance across LRM scales from 7B to 32B (e.g., R1-Distill-Qwen-7B achieves a +8.7 correlation point improvement). These findings highlight the potential of efficiently calibrated LRMs to advance fine-grained automatic MT evaluation.
Problem

Research questions and friction points this paper is trying to address.

Analyzing large reasoning models' effectiveness as machine translation evaluators
Identifying challenges like overthinking and scoring overestimation in evaluation
Proposing calibration methods to improve evaluation performance and efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Calibrated LRMs using synthetic human-like thinking trajectories
Reduced thinking budgets by 35x while improving performance
Enhanced evaluation across different LRM scales from 7B to 32B
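The score-inflation problem these innovations target can be sketched with a simple calibration: fit a z-score transform on a small anchor set where both model and human scores are known, then map new model scores onto the human scale. This is a hypothetical illustration of the general idea, not the paper's dynamic calibration method; all names and numbers here are invented.

```python
from statistics import mean, stdev

def calibrate_scores(raw_scores, anchor_raw, anchor_human):
    """Map raw model scores onto the human score distribution.

    Fits mean/std on an anchor set scored by both the model and
    humans, then applies the z-score transform to new raw scores.
    """
    mu_r, sd_r = mean(anchor_raw), stdev(anchor_raw)
    mu_h, sd_h = mean(anchor_human), stdev(anchor_human)
    return [(s - mu_r) / sd_r * sd_h + mu_h for s in raw_scores]

# A model that systematically scores ~20 points too high gets
# pulled back toward the human scale.
anchor_raw = [95, 90, 85, 80]    # model scores on anchor items
anchor_human = [75, 70, 65, 60]  # human scores on the same items
print(calibrate_scores([92, 88], anchor_raw, anchor_human))  # [72.0, 68.0]
```

A linear transform like this corrects uniform inflation; a dynamic scheme would additionally adapt the mapping per language pair or difficulty bucket.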