Semantic Aware Linear Transfer by Recycling Pre-trained Language Models for Cross-lingual Transfer

📅 2025-05-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limited target-language representation capability of multilingual large language models (LLMs) stemming from English-dominant pretraining, this paper proposes Semantic-Aware Linear Transfer (SALT), a parameter-efficient cross-lingual adaptation framework. SALT constructs a source-to-target vocabulary mapping via semantic similarity computed over overlapping lexical items and learns a lightweight linear projection that aligns source-language LLM embeddings into the embedding space of a target-language pretrained language model (PLM). Crucially, it reuses the PLM’s native embedding space to enhance cross-lingual understanding without fine-tuning or introducing additional trainable parameters. The method is architecture-agnostic and applicable to diverse LLM families. Experiments demonstrate that SALT significantly reduces transfer loss, accelerates convergence, and consistently outperforms existing baselines on cross-lingual understanding benchmarks. Furthermore, ablation studies confirm the strong scalability and generalizability of PLM embedding reuse across distinct LLM architectures.

📝 Abstract
Large Language Models (LLMs) increasingly incorporate multilingual capabilities, fueling the demand to transfer them into target language-specific models. However, most approaches, which blend the source model's embeddings while replacing the source vocabulary with a target language-specific vocabulary, may constrain expressive capacity in the target language, since the source model is predominantly trained on English data. In this paper, we propose Semantic Aware Linear Transfer (SALT), a novel cross-lingual transfer technique that recycles embeddings from target-language Pre-trained Language Models (PLMs) to transmit the deep representational strengths of PLM-derived embeddings to LLMs. SALT derives unique regression lines based on similarity within the overlap of the source and target vocabularies to construct the embedding of each non-overlapping token. Our extensive experiments show that SALT significantly outperforms other transfer methods, achieving lower loss and faster convergence during language adaptation. Notably, SALT obtains remarkable performance in cross-lingual understanding setups compared to other methods. Furthermore, we highlight the scalable use of PLMs to enhance the functionality of contemporary LLMs by conducting experiments with varying architectures.
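The core idea described in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the vocabularies, embedding tables, and neighbor count `k` below are toy assumptions. For a target-only token, we find its most similar tokens among the overlapping vocabulary (in the target PLM's embedding space), fit a least-squares linear map from PLM embeddings to LLM embeddings on those neighbors, and apply it to the token.

```python
import numpy as np

rng = np.random.default_rng(0)
d_plm, d_llm = 8, 16  # toy embedding dimensions

# Toy vocabularies: tokens shared by both models, plus target-only tokens.
overlap = ["the", "cat", "sat", "mat", "dog"]
target_only = ["gato", "perro"]

# Hypothetical embedding tables (stand-ins for real model weight matrices).
plm_emb = {t: rng.normal(size=d_plm) for t in overlap + target_only}
llm_emb = {t: rng.normal(size=d_llm) for t in overlap}

def transfer(token, k=3):
    """Map a target-only token into the LLM embedding space via a
    least-squares linear map fitted on its k most similar overlap tokens."""
    q = plm_emb[token]
    # Cosine similarity to each overlapping token, measured in the PLM space.
    sims = {t: plm_emb[t] @ q / (np.linalg.norm(plm_emb[t]) * np.linalg.norm(q))
            for t in overlap}
    neighbors = sorted(sims, key=sims.get, reverse=True)[:k]
    X = np.stack([plm_emb[t] for t in neighbors])   # (k, d_plm)
    Y = np.stack([llm_emb[t] for t in neighbors])   # (k, d_llm)
    # Closed-form regression: no gradient-based training involved.
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)       # (d_plm, d_llm)
    return q @ W

new_vec = transfer("gato")
print(new_vec.shape)  # (16,)
```

Because the map is a closed-form regression rather than a trained module, this matches the summary's claim that no additional trainable parameters are introduced; each non-overlapping token gets its own locally fitted map.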
Problem

Research questions and friction points this paper is trying to address.

Enhancing cross-lingual transfer for multilingual LLMs
Preserving semantic strength in target language adaptation
Improving efficiency and convergence in language adaptation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Recycles target language PLM embeddings for LLMs
Uses similarity-based regression for non-overlapping tokens
Enhances cross-lingual understanding and convergence speed
Seungyoon Lee — Korea University, Republic of Korea
Seongtae Hong — Korea University (Natural Language Processing)
Hyeonseok Moon — Korea University (Neural Machine Translation, Natural Language Processing)
Heuiseok Lim — Korea University, Republic of Korea