TRepLiNa: Layer-wise CKA+REPINA Alignment Improves Low-Resource Machine Translation in Aya-23 8B

📅 2025-10-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address data scarcity and inconsistent cross-lingual representations in machine translation for Indian low-resource languages (LRLs), this paper proposes a layer-wise cross-lingual representation alignment method for the decoder-only multilingual large language model Aya-23 (8B). The approach integrates centered kernel alignment (CKA) with parameter regularization (REPINA), imposing cross-lingual similarity constraints at specific intermediate decoder layers, and uses QLoRA for efficient fine-tuning; crucially, it requires no additional parallel data. Empirical results show substantial improvements in translation quality from Mundari, Santali, and Bhili into English across zero-shot, few-shot, and full fine-tuning settings, with particularly strong gains under extremely low-resource conditions. This work establishes a scalable, parameter-efficient paradigm for cross-lingual representation alignment in low-resource machine translation.
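The alignment signal described above rests on CKA, a similarity metric between two sets of hidden representations. As a minimal sketch (not the paper's implementation), linear CKA between the hidden states of an LRL sentence batch and an HRL sentence batch at one decoder layer can be computed as follows; the shapes and variable names here are illustrative assumptions:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representation matrices.

    X, Y : (n_samples, hidden_dim) hidden states, e.g. from the same
           decoder layer for LRL and HRL inputs (illustrative shapes).
    Returns a value in [0, 1]; 1 means the representations are
    identical up to rotation and isotropic scaling.
    """
    # Center each feature dimension.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # Linear CKA: ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = np.linalg.norm(Y.T @ X, "fro") ** 2
    denom = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return cross / denom
```

Because CKA is invariant to orthogonal transforms and isotropic scaling, it compares representational geometry rather than raw activations, which is why it is a natural choice for cross-lingual alignment at a fixed layer.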

📝 Abstract
The 2025 Multimodal Models for Low-Resource Contexts and Social Impact (MMLoSo) Language Challenge addresses one of India's most pressing linguistic gaps: the lack of resources for its diverse low-resource languages (LRLs). In this study, we investigate whether enforcing cross-lingual similarity in specific internal layers of a decoder-only multilingual large language model (LLM) can improve translation quality from LRL to high-resource language (HRL). Specifically, we combine Centered Kernel Alignment (CKA), a similarity metric that encourages representations of different languages to align, with REPINA, a regularization method that constrains parameter updates to remain close to the pretrained model, into a joint method we call TRepLiNa. In this research project, we experiment with zero-shot, few-shot, and fine-tuning settings using Aya-23 8B with QLoRA across MMLoSo shared task language pairs (Mundari, Santali, Bhili) with Hindi/English pivots. Our results show that aligning mid-level layers using TRepLiNa (CKA+REPINA) is a low-cost, practical approach to improving LRL translation, especially in data-scarce settings.
Problem

Research questions and friction points this paper is trying to address.

Improving low-resource language translation using layer-wise alignment
Enforcing cross-lingual similarity in multilingual LLM internal layers
Addressing translation quality for diverse Indian low-resource languages

Innovation

Methods, ideas, or system contributions that make the work stand out.

Combining CKA and REPINA for cross-lingual alignment
Aligning mid-level layers to improve translation quality
Using QLoRA with Aya-23 8B for low-resource languages
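The innovations listed above combine into a single training objective: translation loss, a CKA term that pulls cross-lingual representations together at chosen mid-level layers, and a REPINA-style term that keeps the fine-tuned model close to the pretrained one. A hedged sketch of such a joint objective follows; the weights `lam_cka` and `lam_rep` and the function signature are illustrative assumptions, not values from the paper:

```python
def treplina_loss(task_loss, cka_value, drift_penalty,
                  lam_cka=0.1, lam_rep=0.1):
    """Sketch of a TRepLiNa-style joint objective.

    task_loss     : standard translation (cross-entropy) loss
    cka_value     : CKA between LRL and HRL hidden states at the
                    aligned decoder layer, in [0, 1]
    drift_penalty : REPINA-style penalty for drift from the
                    pretrained model
    lam_cka, lam_rep : hypothetical trade-off weights
    """
    # CKA is a similarity to be maximized, so it enters the
    # minimized objective as (1 - CKA).
    return task_loss + lam_cka * (1.0 - cka_value) + lam_rep * drift_penalty
```

In this formulation, larger `lam_cka` pushes harder toward shared cross-lingual geometry while `lam_rep` guards against catastrophic drift, which matches the paper's claim that the method is low-cost: both extra terms are scalar penalties added to an ordinary fine-tuning loop.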