🤖 AI Summary
This study investigates whether concept representations in multilingual large language models (LLMs) are language-agnostic.
Method: Leveraging cross-lingual hidden-state activation patching and layer-wise intervention on multilingual machine translation tasks, we probe the hierarchical encoding of linguistic and conceptual information.
Contribution/Results: We provide the first empirical evidence that concept representations decouple from language-specific representations earlier in deeper Transformer layers. Furthermore, averaging cross-lingual hidden states constructs a robust universal concept space, yielding consistent translation accuracy improvements of 2.3–4.1%. These findings confirm the existence of language-invariant concept representations in mainstream multilingual LLMs and demonstrate their separability from language identity—enabling independent manipulation of concept and language dimensions. Our work thus offers novel empirical support and a methodological foundation for disentangled representation learning and controllable cross-lingual transfer.
📝 Abstract
A central question in multilingual language modeling is whether large language models (LLMs) develop a universal concept representation, disentangled from specific languages. In this paper, we address this question by analyzing latent representations (latents) during a word translation task in transformer-based LLMs. We strategically extract latents from a source translation prompt and insert them into the forward pass on a target translation prompt. By doing so, we find that the output language is encoded in the latent at an earlier layer than the concept to be translated. Building on this insight, we conduct two key experiments. First, we demonstrate that we can change the concept without changing the language and vice versa through activation patching alone. Second, we show that patching with the mean over latents across different languages does not impair and instead improves the models' performance in translating the concept. Our results provide evidence for the existence of language-agnostic concept representations within the investigated models.