High-Dimensional Interlingual Representations of Large Language Models

📅 2025-03-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates whether multilingual large language models (MLLMs) develop unified cross-lingual semantic representations. We find that cross-lingual alignment is highly inconsistent—exhibiting a “fragmented” subspace structure rather than global uniformity. To address this, we propose a decomposable interlingual representation framework and introduce Interlingual Local Overlap (ILO), a novel quantitative metric that enables fine-grained characterization of local alignment for the first time. We further reveal that monolingual fine-tuning disrupts cross-lingual alignment by perturbing early-layer activations, and demonstrate that freezing early layers effectively preserves alignment while enhancing generalization. Empirical validation across 31 languages shows that ILO strongly correlates with downstream cross-lingual transfer performance; freezing early layers improves average cross-lingual transfer accuracy by 12.7%. Our findings provide both theoretical insight into the nature of interlingual representation in MLLMs and practical guidance for improving multilingual transferability.

Technology Category

Application Category

📝 Abstract
Large language models (LLMs) trained on massive multilingual datasets hint at the formation of interlingual constructs--a shared subspace in the representation space. However, evidence regarding this phenomenon is mixed, leaving it unclear whether these models truly develop unified interlingual representations, or present a partially aligned constructs. We explore 31 diverse languages varying on their resource-levels, typologies, and geographical regions; and find that multilingual LLMs exhibit inconsistent cross-lingual alignments. To address this, we propose an interlingual representation framework identifying both the shared interlingual semantic subspace and fragmented components, existed due to representational limitations. We introduce Interlingual Local Overlap (ILO) score to quantify interlingual alignment by comparing the local neighborhood structures of high-dimensional representations. We utilize ILO to investigate the impact of single-language fine-tuning on the interlingual representations in multilingual LLMs. Our results indicate that training exclusively on a single language disrupts the alignment in early layers, while freezing these layers preserves the alignment of interlingual representations, leading to improved cross-lingual generalization. These results validate our framework and metric for evaluating interlingual representation, and further underscore that interlingual alignment is crucial for scalable multilingual learning.
Problem

Research questions and friction points this paper is trying to address.

Assessing inconsistent cross-lingual alignments in multilingual LLMs.
Proposing a framework to identify shared and fragmented interlingual representations.
Evaluating the impact of single-language fine-tuning on interlingual alignment.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Interlingual Local Overlap score quantifies alignment.
Framework identifies shared and fragmented semantic subspaces.
Freezing early layers preserves interlingual alignment.
🔎 Similar Papers
No similar papers found.