Parameter-Efficient Quality Estimation via Frozen Recursive Models

📅 2026-03-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses low parameter efficiency in quality estimation (QE) for low-resource languages with a highly parameter-efficient approach: it freezes the XLM-R embeddings and applies a single weight-shared block recursively in place of a deeper fine-tuned stack. This drastically reduces the number of trainable parameters while matching or even surpassing larger, fully fine-tuned models. Across eight language pairs the method reaches a Spearman correlation of 0.370, comparable to fine-tuned baselines, with 37 times fewer trainable parameters. Notably, on Hindi and Tamil it outperforms MonoTransQuest with 80 times fewer trainable parameters, improving both efficiency and effectiveness in low-resource QE scenarios.

📝 Abstract
Tiny Recursive Models (TRM) achieve strong results on reasoning tasks through iterative refinement of a shared network. We investigate whether these recursive mechanisms transfer to Quality Estimation (QE) for low-resource languages using a three-phase methodology. Experiments on $8$ language pairs from a low-resource QE dataset reveal three findings. First, TRM's recursive mechanisms do not transfer to QE: external iteration hurts performance, and internal recursion offers only narrow benefits. Second, representation quality dominates architectural choices. Third, frozen pretrained embeddings match fine-tuned performance while reducing trainable parameters by 37$\times$ (7M vs 262M). TRM-QE with frozen XLM-R embeddings achieves a Spearman's correlation of 0.370, matching fine-tuned variants (0.369) and outperforming an equivalent-depth standard transformer (0.336). On Hindi and Tamil, frozen TRM-QE outperforms MonoTransQuest (560M parameters) with 80$\times$ fewer trainable parameters, suggesting that weight sharing combined with frozen embeddings enables parameter-efficient QE. We release the code publicly for further research at https://github.com/surrey-nlp/TRMQE.
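The two ingredients the abstract combines, frozen pretrained embeddings and a single weight-shared block applied recursively, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the class name, layer sizes, step count, and the toy embedding table standing in for frozen XLM-R embeddings are all assumptions.

```python
import torch
import torch.nn as nn

class RecursiveQEHead(nn.Module):
    """Toy QE regressor: frozen embeddings + one weight-shared block
    applied for several internal recursion steps (hypothetical sketch)."""

    def __init__(self, vocab_size=1000, d_model=64, n_steps=3):
        super().__init__()
        # Frozen embedding table (stand-in for pretrained XLM-R embeddings);
        # excluding it from gradient updates is what shrinks the trainable count.
        self.embed = nn.Embedding(vocab_size, d_model)
        self.embed.weight.requires_grad_(False)
        # A single transformer block reused at every step: weight sharing means
        # depth grows with n_steps while parameters stay constant.
        self.block = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, dim_feedforward=128, batch_first=True
        )
        self.n_steps = n_steps
        self.score = nn.Linear(d_model, 1)  # sentence-level QE score

    def forward(self, token_ids):
        h = self.embed(token_ids)
        for _ in range(self.n_steps):  # same weights at each refinement step
            h = self.block(h)
        return self.score(h.mean(dim=1)).squeeze(-1)  # mean-pool then regress

model = RecursiveQEHead()
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
```

With real XLM-R embeddings the frozen table dominates the total parameter count, so the trainable fraction is small, which is the effect behind the paper's reported 37$\times$ reduction.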
Problem

Research questions and friction points this paper is trying to address.

Quality Estimation
low-resource languages
parameter efficiency
frozen embeddings
recursive models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parameter-Efficient Learning
Quality Estimation
Frozen Embeddings
Recursive Models
Low-Resource Languages