LayAlign: Enhancing Multilingual Reasoning in Large Language Models via Layer-Wise Adaptive Fusion and Alignment Strategy

📅 2025-02-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limited reasoning performance of large language models (LLMs) on low-resource languages, this paper proposes LayAlign—a layer-adaptive fusion and alignment framework that, for the first time, fully integrates multilingual encoder representations across all layers into an LLM. Methodologically, LayAlign introduces layer-aware adapters and a cross-model layer-wise attention mechanism (LayAtt) to model fine-grained interactions between each encoder layer and intermediate LLM layers, followed by joint fine-tuning of both components. Experiments demonstrate that LayAlign significantly outperforms state-of-the-art baselines on multilingual reasoning tasks. Representation analysis further confirms that cross-layer semantic alignment effectively enhances deep semantic understanding for low-resource languages, validating the framework’s capacity to bridge representational gaps across architectures and languages.
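The paper's actual implementation is not reproduced here, but the core idea — letting each LLM layer attend over the hidden states of *all* multilingual encoder layers rather than only the final one — can be sketched in a few lines. The function names, shapes, and the dot-product scoring below are illustrative assumptions, not the authors' code:

```python
import math
import random

random.seed(0)

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def layerwise_fusion(encoder_layers, query):
    """Fuse representations from all encoder layers for one LLM layer.

    encoder_layers: list of L hidden vectors, one per encoder layer
                    (in the paper these would pass through layer-aware
                    adapters first; omitted here for brevity).
    query: a learnable query vector associated with this LLM layer,
           standing in for the cross-model layer-wise attention (LayAtt).
    Returns the attention-weighted combination and the weights.
    """
    scores = [dot(query, h) for h in encoder_layers]
    weights = softmax(scores)
    dim = len(encoder_layers[0])
    fused = [sum(w * h[d] for w, h in zip(weights, encoder_layers))
             for d in range(dim)]
    return fused, weights

# toy example: 3 encoder layers, hidden size 4
enc = [[random.gauss(0, 1) for _ in range(4)] for _ in range(3)]
q = [random.gauss(0, 1) for _ in range(4)]
fused, weights = layerwise_fusion(enc, q)
print(len(fused), round(sum(weights), 6))  # → 4 1.0
```

Each LLM layer would hold its own query (and, in the paper, its own adapter parameters), so different depths can emphasize different encoder layers — shallow layers might weight surface-level encoder states while deeper layers weight semantic ones.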

📝 Abstract
Despite being pretrained on multilingual corpora, large language models (LLMs) exhibit suboptimal performance on low-resource languages. Recent approaches have leveraged multilingual encoders alongside LLMs by introducing trainable parameters connecting the two models. However, these methods typically focus on the encoder's output, overlooking valuable information from other layers. We propose LayAlign, a framework that integrates representations from all encoder layers, coupled with the LayAtt attention mechanism to enable layer-wise interaction between the LLM and the multilingual encoder. Extensive experiments on multilingual reasoning tasks, along with analyses of learned representations, show that our approach consistently outperforms existing baselines.
Problem

Research questions and friction points this paper is trying to address.

Enhance multilingual reasoning in LLMs
Address low-resource language performance
Integrate all encoder layer representations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Layer-wise adaptive fusion via layer-aware adapters
Cross-model layer-wise attention (LayAtt)
Full-depth integration of multilingual encoder representations