🤖 AI Summary
Short-context modeling in Machine Translation Quality Estimation (QE) yields insufficient correlation with human judgments. Method: We propose a multilingual regression framework incorporating long-context information: (i) constructing tripartite long-context training data comprising source, translation, and reference; (ii) unifying MQM, SQM, and DA annotation schemes via a weighted-average label synthesis strategy; and (iii) extending the COMET architecture with sentence concatenation and score normalization to predict Error Span Annotation (ESA) scores at the segment level. Results: Our approach achieves statistically significant improvements in correlation with human ratings across multiple benchmarks—e.g., an average +0.12 gain in Pearson correlation—outperforming short-context baselines. This demonstrates that explicit long-context modeling delivers substantial gains in QE accuracy.
📝 Abstract
In this paper, we present our submission to the Tenth Conference on Machine Translation (WMT25) Shared Task on Automated Translation Quality Evaluation.
Our systems are built upon the COMET framework and trained to predict segment-level Error Span Annotation (ESA) scores using augmented long-context data.
To construct long-context training data, we concatenate in-domain, human-annotated sentences and compute a weighted average of their scores.
We integrate multiple human-judgment datasets (MQM, SQM, and DA) by normalizing their scales, and we train multilingual regression models to predict quality scores from the source, hypothesis, and reference translations.
Experimental results show that incorporating long-context information improves correlations with human judgments compared to models trained only on short segments.
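The long-context data construction described above can be sketched as follows. This is a minimal illustration, not the authors' exact recipe: the choice of z-score normalization, the window size, and weighting segment scores by token count are all assumptions introduced for the example.

```python
# Hypothetical sketch of long-context QE data construction: concatenate
# consecutive in-domain annotated segments and synthesize a single label
# as a weighted average of their scores. Weighting by token count and
# z-score normalization are assumptions, not the paper's stated method.
from statistics import mean, pstdev


def normalize(scores):
    """Z-normalize raw scores from one annotation scheme (MQM, SQM, or DA)."""
    mu, sigma = mean(scores), pstdev(scores)
    return [(s - mu) / sigma if sigma else 0.0 for s in scores]


def build_long_context(segments, scores, window=3):
    """Concatenate `window` consecutive segments; length-weight their scores."""
    examples = []
    for i in range(0, len(segments) - window + 1, window):
        chunk = segments[i:i + window]
        chunk_scores = scores[i:i + window]
        weights = [len(s.split()) for s in chunk]  # token-count weights (assumption)
        label = sum(w * sc for w, sc in zip(weights, chunk_scores)) / sum(weights)
        examples.append((" ".join(chunk), label))
    return examples
```

In practice the same construction would be applied in parallel to the source, hypothesis, and reference sides of each training triple, with per-scheme normalization run before the schemes are pooled.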