🤖 AI Summary
To address the scarcity of human-annotated evaluation data and the poor generalization of existing automatic metrics (e.g., BLEU and learned neural metrics) for Indian-language translation, this paper introduces COMTAIL, a cross-lingual neural evaluation metric, together with the first large-scale human-rated dataset covering 13 Indian languages and 21 translation directions. COMTAIL builds on a Transformer architecture enhanced with multilingual embedding alignment, transfer learning, and contrastive learning to improve cross-lingual semantic matching, especially under low-resource conditions. Ablation studies show the metric's sensitivity to changes in domain, translation quality, and language groupings. Empirically, COMTAIL substantially outperforms state-of-the-art metrics on multiple Indian language pairs. Both the dataset and the models are publicly released, establishing a new benchmark and a practical tool for machine translation evaluation in low-resource languages.
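The summary mentions contrastive learning as one of the training enhancements. The abstract does not spell out the exact objective, but a common choice for pulling cross-lingual sentence embeddings of matching source/translation pairs together is an InfoNCE-style loss. The sketch below is a hedged illustration in plain Python (the `cosine` and `info_nce_loss` helpers are hypothetical names, not part of the COMTAIL codebase):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE contrastive loss: encourage the anchor embedding to be
    more similar to its positive (e.g., a correct translation) than to
    any of the negatives (e.g., mismatched or degraded translations)."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / temperature for s in sims]
    # Numerically stable softmax over the positive vs. negatives.
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[0] / sum(exps))
```

With a well-aligned positive the loss is near zero; when a negative is closer to the anchor than the positive, the loss grows, which is the gradient signal that tightens cross-lingual semantic matching.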
📝 Abstract
Automatic evaluation of translation remains a challenging task owing to the orthographic, morphological, syntactic, and semantic richness and divergence observed across languages. String-based metrics such as BLEU have previously been used extensively for automatic evaluation tasks, but their limitations are now increasingly recognized. Although learned neural metrics have helped mitigate some of the limitations of string-based approaches, they remain constrained by a paucity of gold evaluation data in most languages beyond the usual high-resource pairs. In the present work, we address some of these gaps. We create a large dataset of human evaluation ratings for 13 Indian languages covering 21 translation directions, and then train a neural translation evaluation metric named Cross-lingual Optimized Metric for Translation Assessment of Indian Languages (COMTAIL) on this dataset. The best-performing metric variants show significant gains over the previous state of the art when adjudging translation pairs with at least one Indian language. Furthermore, we conduct a series of ablation studies to highlight the sensitivities of such a metric to changes in domain, translation quality, and language groupings. We release both the COMTAIL dataset and the accompanying metric models.