Separating Tongue from Thought: Activation Patching Reveals Language-Agnostic Concept Representations in Transformers

📅 2024-11-13
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates whether concept representations in multilingual large language models (LLMs) are language-agnostic. Method: Leveraging cross-lingual hidden-state activation patching and layer-wise intervention on multilingual machine translation tasks, we probe the hierarchical encoding of linguistic and conceptual information. Contribution/Results: We provide the first empirical evidence that concept representations decouple from language-specific representations earlier in deeper Transformer layers. Furthermore, averaging cross-lingual hidden states constructs a robust universal concept space, yielding consistent translation accuracy improvements of 2.3–4.1%. These findings confirm the existence of language-invariant concept representations in mainstream multilingual LLMs and demonstrate their separability from language identity—enabling independent manipulation of concept and language dimensions. Our work thus offers novel empirical support and a methodological foundation for disentangled representation learning and controllable cross-lingual transfer.

Technology Category

Application Category

📝 Abstract
A central question in multilingual language modeling is whether large language models (LLMs) develop a universal concept representation, disentangled from specific languages. In this paper, we address this question by analyzing latent representations (latents) during a word translation task in transformer-based LLMs. We strategically extract latents from a source translation prompt and insert them into the forward pass on a target translation prompt. By doing so, we find that the output language is encoded in the latent at an earlier layer than the concept to be translated. Building on this insight, we conduct two key experiments. First, we demonstrate that we can change the concept without changing the language and vice versa through activation patching alone. Second, we show that patching with the mean over latents across different languages does not impair and instead improves the models' performance in translating the concept. Our results provide evidence for the existence of language-agnostic concept representations within the investigated models.
Problem

Research questions and friction points this paper is trying to address.

Multilingual Learning
Concept Understanding
Cross-lingual Representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Activation Patching
Cross-lingual Concept Representation
Multilingual Fine-tuning
🔎 Similar Papers
No similar papers found.