Language Fusion for Parameter-Efficient Cross-lingual Transfer

📅 2025-01-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multilingual large language models (LLMs) suffer from weak cross-lingual representations and poor transfer performance due to insufficient non-English pretraining data, while existing cross-lingual adaptation methods incur substantial computational overhead. To address this, the authors propose FLARE, a lightweight LoRA-based intra-adapter language representation fusion method. The approach incorporates source–target language representation alignment and interaction mechanisms directly into the low-rank linear transformation—without token mixing or parameter expansion—thereby achieving both high parameter efficiency and improved cross-lingual representation quality. Empirical evaluation on multilingual natural language understanding (NLU) tasks demonstrates significant gains: Llama 3.1 and Gemma 2 achieve +4.9% and +2.2% improvements in Exact Match on question answering, respectively, outperforming standard LoRA fine-tuning. This work introduces the first LoRA adapter architecture that enables *intra-adapter* cross-lingual feature fusion, establishing a novel paradigm for efficient multilingual adaptation.

📝 Abstract
Limited availability of multilingual text corpora for training language models often leads to poor performance on downstream tasks due to undertrained representation spaces for languages other than English. This 'under-representation' has motivated recent cross-lingual transfer methods to leverage the English representation space by, e.g., mixing English and non-English tokens at the input level or extending model parameters to accommodate new languages. However, these approaches often come at the cost of increased computational complexity. We propose Fusion for Language Representations (FLARE) in adapters, a novel method that enhances representation quality and downstream performance for languages other than English while maintaining parameter efficiency. FLARE integrates source and target language representations within low-rank (LoRA) adapters using lightweight linear transformations, maintaining parameter efficiency while improving transfer performance. A series of experiments across representative cross-lingual natural language understanding tasks, including natural language inference, question answering, and sentiment analysis, demonstrates FLARE's effectiveness. FLARE achieves performance improvements of 4.9% for Llama 3.1 and 2.2% for Gemma 2 compared to standard LoRA fine-tuning on question-answering tasks, as measured by the exact match metric.
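The core idea described in the abstract — fusing source- and target-language hidden states inside the low-rank adapter path via a lightweight linear map — can be sketched with plain matrix operations. This is an illustrative sketch only, not the authors' implementation: the fusion map `W_fuse`, the concatenation-based fusion, and all dimensions are assumptions; the paper's exact alignment/interaction mechanism may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 16, 4  # hidden size and LoRA rank (illustrative values, not from the paper)

# Frozen base weight and trainable low-rank factors, as in standard LoRA
# (B is zero-initialized, so the adapter starts as an identity perturbation).
W = rng.normal(size=(d, d))
A = rng.normal(size=(r, d)) * 0.01
B = np.zeros((d, r))

# Hypothetical lightweight fusion map combining source- and target-language
# hidden states before the low-rank projection (an assumption for illustration).
W_fuse = rng.normal(size=(d, 2 * d)) * 0.01

def flare_forward(h_src, h_tgt):
    """Sketch of intra-adapter fusion: the LoRA path operates on a fused
    source/target representation while the frozen base path is unchanged."""
    fused = W_fuse @ np.concatenate([h_src, h_tgt])  # align and interact
    return W @ h_tgt + B @ (A @ fused)               # frozen base + low-rank path

h_src = rng.normal(size=d)  # e.g. an English (source-language) hidden state
h_tgt = rng.normal(size=d)  # target-language hidden state
out = flare_forward(h_src, h_tgt)
print(out.shape)  # (16,)
```

Note that because `B` starts at zero, the fused adapter initially reproduces the frozen base output, mirroring the standard LoRA initialization; only the low-rank path sees the cross-lingual fusion, which keeps the parameter count close to plain LoRA.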
Problem

Research questions and friction points this paper is trying to address.

Multilingual Environment
Insufficient Training Data
Computational Resource Consumption
Innovation

Methods, ideas, or system contributions that make the work stand out.

FLARE
Multilingual Performance Enhancement
Cross-lingual Task Optimization