Focusing on Language: Revealing and Exploiting Language Attention Heads in Multilingual Large Language Models

📅 2025-11-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the intrinsic mechanisms by which multi-head self-attention (MHA) in multilingual large language models (LLMs) enables cross-lingual transfer. To this end, we propose a gradient-based sensitivity metric to quantify the importance of individual attention heads per language—yielding the first empirical evidence of coexisting language-specific and language-general attention heads. Building on this insight, we design a lightweight soft attention head masking mechanism with only 20 trainable parameters, enabling fine-grained language-aware attention redistribution and effective suppression of non-target language generation. We validate our approach on Aya-23-8B, Llama-3.2-3B, and Mistral-7B-v0.1, demonstrating significant improvements in XQuAD accuracy while simultaneously enhancing cross-lingual understanding capabilities and model interpretability.
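As a rough illustration of the gradient-based sensitivity idea described above, the sketch below scores each attention head from a single forward and backward pass by attaching a per-head gate fixed at 1.0 and reading off the magnitude of the loss gradient with respect to that gate. The toy model, the gate-saliency formulation, and all tensor shapes are assumptions made for illustration; they are not the paper's LAHIS implementation.

```python
# Hypothetical sketch of a gradient-based head-importance score in the spirit of
# the summary above: one forward + backward pass per language, with the importance
# of head (layer, h) taken as |dLoss/d(gate)| for a gate fixed at 1.0.
# This follows the common gate-saliency formulation as an assumption; the paper's
# exact LAHIS definition is not reproduced here.
import torch
import torch.nn as nn

class GatedSelfAttention(nn.Module):
    """Multi-head self-attention whose per-head outputs are scaled by a gate."""
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor, gate: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        shape = (b, t, self.n_heads, self.d_head)
        q, k, v = (z.view(shape).transpose(1, 2) for z in (q, k, v))
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)
        heads = attn @ v                               # (batch, heads, tokens, d_head)
        heads = heads * gate.view(1, -1, 1, 1)         # per-head gate, fixed at 1.0
        return self.out(heads.transpose(1, 2).reshape(b, t, d))

def head_importance(layers, embed, batch, labels, loss_fn):
    """One forward/backward pass; returns an (n_layers, n_heads) importance matrix."""
    gates = [torch.ones(layer.n_heads, requires_grad=True) for layer in layers]
    x = embed(batch)
    for layer, gate in zip(layers, gates):
        x = x + layer(x, gate)                         # toy residual stack
    loss = loss_fn(x.mean(dim=1), labels)
    loss.backward()
    return torch.stack([g.grad.abs() for g in gates])  # gradient sensitivity per head

# Toy usage: scores for one "language" batch; repeat per language to compare heads.
torch.manual_seed(0)
layers = nn.ModuleList([GatedSelfAttention(d_model=32, n_heads=4) for _ in range(2)])
embed = nn.Embedding(100, 32)
batch = torch.randint(0, 100, (8, 16))                 # stand-in for one language's tokens
labels = torch.randn(8, 32)
scores = head_importance(layers, embed, batch, labels, nn.MSELoss())
print(scores.shape)                                    # torch.Size([2, 4]): layers x heads
```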

📝 Abstract
Large language models (LLMs) increasingly support multilingual understanding and generation. Meanwhile, efforts to interpret their internal mechanisms have emerged, offering insights to enhance multilingual performance. While multi-head self-attention (MHA) has proven critical in many areas, its role in multilingual capabilities remains underexplored. In this work, we study the contribution of MHA in supporting multilingual processing in LLMs. We propose Language Attention Head Importance Scores (LAHIS), an effective and efficient method that identifies attention head importance for multilingual capabilities via a single forward and backward pass through the LLM. Applying LAHIS to Aya-23-8B, Llama-3.2-3B, and Mistral-7B-v0.1, we reveal the existence of both language-specific and language-general heads. Language-specific heads enable cross-lingual attention transfer to guide the model toward target-language contexts and mitigate the off-target language generation issue, helping address key challenges in multilingual LLMs. We also introduce a lightweight adaptation that learns a soft head mask to modulate attention outputs over language heads, requiring only 20 tunable parameters to improve XQuAD accuracy. Overall, our work enhances both the interpretability and multilingual capabilities of LLMs from the perspective of MHA.
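The abstract's 20-parameter soft head mask could be realized in several ways; the sketch below assumes one trainable logit per selected (layer, head) pair, passed through a sigmoid and used to rescale that head's attention output while the backbone stays frozen. The `SoftHeadMask` module, the head indices, and the 2·sigmoid parameterization are hypothetical illustrations, not the authors' released code.

```python
# Minimal sketch of a lightweight soft head mask: 20 selected language heads ->
# 20 trainable parameters, everything else frozen. The parameterization below
# (2 * sigmoid(logit), so training starts from the unmodified model) is a design
# choice of this sketch only, not taken from the paper.
import torch
import torch.nn as nn

class SoftHeadMask(nn.Module):
    """Learns one logit per selected (layer, head) pair."""
    def __init__(self, selected_heads: list[tuple[int, int]]):
        super().__init__()
        self.selected = selected_heads                       # e.g. 20 (layer, head) pairs
        self.logits = nn.Parameter(torch.zeros(len(selected_heads)))

    def scale(self, layer: int, head: int) -> torch.Tensor:
        """Soft mask for a given head; 1.0 (no change) if the head is not selected."""
        if (layer, head) in self.selected:
            idx = self.selected.index((layer, head))
            # 2 * sigmoid(0) = 1.0, so the initial mask leaves the model unchanged.
            return 2.0 * torch.sigmoid(self.logits[idx])
        return torch.ones(())

    def modulate(self, head_outputs: torch.Tensor, layer: int) -> torch.Tensor:
        """head_outputs: (batch, n_heads, tokens, d_head) from one attention layer."""
        scales = torch.stack(
            [self.scale(layer, h) for h in range(head_outputs.shape[1])]
        )
        return head_outputs * scales.view(1, -1, 1, 1)

# Usage sketch: 20 hypothetical language heads, reweighted during the forward pass.
selected = [(l, h) for l in range(5) for h in range(4)]      # 20 (layer, head) pairs
mask = SoftHeadMask(selected)
fake_heads = torch.randn(2, 8, 16, 64)                       # (batch, heads, tokens, d_head)
print(mask.modulate(fake_heads, layer=3).shape)              # same shape, reweighted heads
print(sum(p.numel() for p in mask.parameters()))             # 20 trainable parameters
```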
Problem

Research questions and friction points this paper is trying to address.

Investigating attention head roles in multilingual large language models
Identifying language-specific and general heads for cross-lingual transfer
Developing lightweight adaptation to improve multilingual question answering
Innovation

Methods, ideas, or system contributions that make the work stand out.

LAHIS identifies per-language attention head importance in a single forward and backward pass (see the head-selection sketch after this list)
Language-specific heads enable cross-lingual attention transfer toward target-language contexts
Lightweight adaptation learns a soft head mask with only 20 trainable parameters
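One plausible way to separate language-specific from language-general heads, given a per-language importance tensor such as LAHIS produces, is sketched below: a head that ranks in the top fraction for every language is treated as language-general, while one that ranks highly for only a few languages is treated as language-specific. The top-10% cutoff and the hit-count thresholds are illustrative assumptions, not values from the paper.

```python
# Hypothetical post-processing of per-language head-importance scores, shaped
# (n_languages, n_layers, n_heads), into language-specific vs. language-general sets.
import torch

def split_heads(scores: torch.Tensor, top_frac: float = 0.1):
    """scores: (n_languages, n_layers, n_heads); returns two boolean (layer, head) masks."""
    n_lang = scores.shape[0]
    flat = scores.view(n_lang, -1)
    k = max(1, int(top_frac * flat.shape[1]))
    thresh = flat.topk(k, dim=1).values[:, -1:]            # per-language top-k cutoff
    important = (flat >= thresh).view_as(scores)           # which heads matter per language
    hit_count = important.sum(dim=0)                       # in how many languages?
    language_general = hit_count == n_lang                 # important for every language
    language_specific = (hit_count >= 1) & (hit_count <= max(1, n_lang // 3))
    return language_specific, language_general

# Toy usage with random scores standing in for real per-language importance values.
torch.manual_seed(0)
scores = torch.rand(6, 32, 32)                             # 6 languages, 32 layers x 32 heads
spec, gen = split_heads(scores)
print(int(spec.sum()), "language-specific heads,", int(gen.sum()), "language-general heads")
```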
Xin Liu
Institute of Information Engineering, Chinese Academy of Sciences
Qiyang Song
Institute of Information Engineering, Chinese Academy of Sciences
Qihang Zhou
Zhejiang University
Anomaly detection, Vision language model, Prompt learning
Haichao Du
Institute of Information Engineering, Chinese Academy of Sciences
Shaowen Xu
Institute of Information Engineering, Chinese Academy of Sciences
Wenbo Jiang
University of Electronic Science and Technology of China
AI security, Backdoor attack
Weijuan Zhang
Institute of Information Engineering, Chinese Academy of Sciences
Xiaoqi Jia
Institute of Information Engineering, Chinese Academy of Sciences