🤖 AI Summary
To address performance bottlenecks in Large Language Model (LLM)-based Quality Estimation (QE) for machine translation, which stem from the mismatch between causal language modelling pretraining and regression-style prediction as well as imbalanced cross-lingual data distributions, this paper proposes ALOPE, a layer-level adaptive optimization framework. Built upon Low-Rank Adaptation (LoRA), the method attaches regression heads to selected intermediate Transformer layers, dynamically weights and combines their representations, and aggregates losses from multiple regression heads, strengthening cross-lingual alignment for reference-free quality prediction. Empirical results indicate that intermediate-layer representations are inherently better suited to the QE task. The proposed approach consistently outperforms existing LLM-based QE approaches across diverse LLM backbones, with particularly substantial gains on low-resource language pairs. To foster reproducibility and further research, the source code and trained models are publicly released.
📝 Abstract
Large Language Models (LLMs) have shown remarkable performance across a wide range of natural language processing tasks. Quality Estimation (QE) for Machine Translation (MT), which assesses the quality of a source-target pair without relying on reference translations, remains a challenging cross-lingual task for LLMs. The challenges stem from the inherent limitations of existing LLM-based QE systems, which are pre-trained for causal language modelling rather than regression-specific tasks, and are compounded by the under-representation of low-resource languages in the pre-training data distribution. This paper introduces ALOPE, an adaptive layer-optimization framework designed to enhance LLM-based QE by restructuring Transformer representations through layer-wise adaptation for improved regression-based prediction. Our framework integrates low-rank adapters (LoRA) with regression task heads, leveraging selected pre-trained Transformer layers for improved cross-lingual alignment. In addition to the layer-specific adaptation, ALOPE introduces two strategies: dynamic weighting, which adaptively combines representations from multiple layers, and multi-head regression, which aggregates regression losses from multiple heads for QE. Our framework shows improvements over various existing LLM-based QE approaches. Empirical evidence suggests that intermediate Transformer layers in LLMs provide contextual representations that are better aligned with the cross-lingual nature of the QE task. We make the resulting models and framework code publicly available for further research, also allowing existing LLM-based MT frameworks to be extended with QE capabilities.
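
As a rough illustration of the mechanisms the abstract describes, the PyTorch sketch below shows how hidden states from selected intermediate layers could be combined with learned weights (dynamic weighting) and scored by several regression heads whose losses are aggregated (multi-head regression). All names here (`LayerAdaptiveQEHead`, `layer_ids`, `num_heads`) are invented for illustration and are not taken from the ALOPE release; the LoRA-adapted LLM backbone that would produce the hidden states is omitted.

```python
import torch
import torch.nn as nn

class LayerAdaptiveQEHead(nn.Module):
    """Illustrative sketch (not the authors' code): mixes hidden states from
    selected intermediate Transformer layers with learned weights and predicts
    a sentence-level QE score with several regression heads whose MSE losses
    are averaged."""

    def __init__(self, hidden_size: int, layer_ids: list[int], num_heads: int = 3):
        super().__init__()
        self.layer_ids = layer_ids
        # One learnable scalar per selected layer, softmax-normalised at runtime.
        self.layer_logits = nn.Parameter(torch.zeros(len(layer_ids)))
        # Independent linear regression heads over the pooled representation.
        self.heads = nn.ModuleList(nn.Linear(hidden_size, 1) for _ in range(num_heads))

    def forward(self, hidden_states, attention_mask, target=None):
        # hidden_states: tuple of (batch, seq_len, hidden) tensors, as returned
        # by a Transformer called with output_hidden_states=True.
        weights = torch.softmax(self.layer_logits, dim=0)
        mixed = sum(w * hidden_states[i] for w, i in zip(weights, self.layer_ids))

        # Mean-pool over non-padding tokens.
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (mixed * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1.0)

        preds = torch.cat([head(pooled) for head in self.heads], dim=-1)  # (batch, num_heads)
        score = preds.mean(dim=-1)  # final QE score per sentence pair

        loss = None
        if target is not None:
            # Aggregate a regression (MSE) loss over every head.
            loss = sum(
                nn.functional.mse_loss(preds[:, k], target)
                for k in range(preds.size(-1))
            ) / preds.size(-1)
        return score, loss
```

In practice such a head would sit on top of a LoRA-adapted LLM (for instance via the `peft` library) called with `output_hidden_states=True`, and would be trained jointly with the adapters on sentence-level QE scores.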