Crosslingual Optimized Metric for Translation Assessment of Indian Languages

📅 2025-09-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the scarcity of human-annotated evaluation data and the poor generalization of existing automatic metrics (string-based metrics such as BLEU, as well as current learned neural metrics) for Indian-language translation, this paper introduces COMTAIL, a cross-lingual neural evaluation metric, together with a large-scale human-rated dataset covering 13 Indian languages and 21 translation directions. Trained on these human ratings, the best-performing COMTAIL variants show significant gains over previous state-of-the-art metrics when judging translation pairs involving at least one Indian language. Ablation studies examine the metric's sensitivity to changes in domain, translation quality, and language groupings. Both the dataset and the metric models are publicly released, providing a new benchmark and a practical tool for machine translation evaluation in low-resource settings.

📝 Abstract
Automatic evaluation of translation remains a challenging task owing to the orthographic, morphological, syntactic, and semantic richness and divergence observed across languages. String-based metrics such as BLEU have previously been extensively used for automatic evaluation tasks, but their limitations are now increasingly recognized. Although learned neural metrics have helped mitigate some of the limitations of string-based approaches, they remain constrained by a paucity of gold evaluation data in most languages beyond the usual high-resource pairs. In this work we address some of these gaps. We create a large human evaluation ratings dataset for 13 Indian languages covering 21 translation directions and then train a neural translation evaluation metric named Cross-lingual Optimized Metric for Translation Assessment of Indian Languages (COMTAIL) on this dataset. The best-performing metric variants show significant performance gains over previous state-of-the-art when adjudging translation pairs with at least one Indian language. Furthermore, we conduct a series of ablation studies to highlight the sensitivities of such a metric to changes in domain, translation quality, and language groupings. We release both the COMTAIL dataset and the accompanying metric models.
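The abstract describes the general recipe behind learned metrics of this kind: embed the source, hypothesis, and reference with a multilingual encoder, combine the embeddings into a feature vector, and regress that vector onto human quality ratings. A minimal sketch of that recipe, with a deterministic pseudo-encoder and closed-form ridge regression standing in for the real multilingual encoder and trained regression head (the data, feature layout, and every name here are illustrative assumptions, not taken from the paper):

```python
import numpy as np

DIM = 16  # toy embedding size; real encoders use hundreds of dimensions

def embed(text, dim=DIM):
    """Stand-in for a multilingual sentence encoder.

    A real system would use a pretrained model; here we derive a
    deterministic pseudo-embedding from the text so the sketch runs
    without any model downloads."""
    seed = sum(ord(c) for c in text) % (2**32)
    return np.random.default_rng(seed).standard_normal(dim)

def features(src, hyp, ref):
    """Combine hypothesis, source, and reference embeddings with
    element-wise products and absolute differences, a feature layout
    commonly used by learned regression metrics."""
    s, h, r = embed(src), embed(hyp), embed(ref)
    return np.concatenate([h, s, r, h * s, h * r, np.abs(h - s), np.abs(h - r)])

# Toy data: (source, hypothesis, reference, human rating in [0, 1]).
data = [
    ("यह एक किताब है", "This is a book", "This is a book", 0.95),
    ("यह एक किताब है", "It is book", "This is a book", 0.60),
    ("मुझे चाय पसंद है", "I like tea", "I like tea", 0.90),
    ("मुझे चाय पसंद है", "Tea likes me", "I like tea", 0.30),
]

X = np.stack([features(s, h, r) for s, h, r, _ in data])
y = np.array([rating for *_, rating in data])

# Closed-form ridge regression as a stand-in for the trained
# regression head that maps features to a predicted quality score.
lam = 1e-3
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
pred = X @ w
print(np.round(pred, 2))
```

With random pseudo-embeddings the fitted scores only interpolate the toy ratings; the point of the sketch is the pipeline shape (encode, featurize, regress onto human judgments), which is what training on a large human-rated dataset such as the one released here makes possible.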
Problem

Research questions and friction points this paper is trying to address.

Lack of reliable automatic evaluation metrics for Indian language translations
Limited human evaluation data for low-resource Indian language pairs
Insufficient assessment tools for diverse translation directions across 13 Indian languages
Innovation

Methods, ideas, or system contributions that make the work stand out.

Created large human evaluation dataset for 13 Indian languages
Trained neural metric COMTAIL on multi-language translation pairs
Conducted ablation studies for domain and quality sensitivity
Arafat Ahsan
Language Technologies Research Centre, International Institute of Information Technology, Hyderabad, India.
Vandan Mujadia
IIIT-Hyderabad
Pruthwik Mishra
SVNIT, Surat
Yash Bhaskar
Language Technologies Research Centre, International Institute of Information Technology, Hyderabad, India.
Dipti Misra Sharma
Language Technologies Research Centre, International Institute of Information Technology, Hyderabad, India.