🤖 AI Summary
This study investigates whether internal representations in multilingual large language models are governed primarily by abstract linguistic structure or by surface-level features such as writing systems. To address this, the authors combine Language Activation Probability Entropy (LAPE) with sparse autoencoders, complemented by word-order perturbations, romanization interventions, probing analyses, and causal mediation. The findings reveal that model representations are strongly shaped by orthography: romanization nearly eliminates representational overlap across languages. While typological similarities gradually emerge in deeper layers, they do not coalesce into a unified interlingua. Moreover, generation performance is largely sustained by neurons that are invariant to surface-form perturbations. These results underscore the dominant role of writing systems in multilingual representation learning and challenge prevailing assumptions that prioritize abstract linguistic structure.
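For reference, LAPE scores each unit by the entropy of its activation probabilities across languages: a unit whose activation mass concentrates on one language gets low entropy and is flagged as language-specific. Below is a minimal NumPy sketch of that computation, not the paper's implementation; the array shapes, the 0.25 selection quantile, and the name `lape_scores` are illustrative assumptions.

```python
import numpy as np

def lape_scores(act_prob: np.ndarray) -> np.ndarray:
    """Language Activation Probability Entropy per neuron.

    act_prob[i, j] is the empirical probability that neuron i fires
    (activation > 0) on text in language j, shape (n_neurons, n_langs).
    """
    # Normalize each neuron's activation probabilities into a
    # distribution over languages.
    dist = act_prob / act_prob.sum(axis=1, keepdims=True)
    # Low entropy = activation mass concentrated on few languages,
    # i.e. a language-specific unit.
    eps = 1e-12
    return -(dist * np.log(dist + eps)).sum(axis=1)

# Hypothetical usage: 8 neurons observed over 4 languages.
rng = np.random.default_rng(0)
act_prob = rng.uniform(0.01, 0.9, size=(8, 4))
entropy = lape_scores(act_prob)
# Flag the lowest-entropy quartile as language-specific units.
language_specific = np.where(entropy <= np.quantile(entropy, 0.25))[0]
print(language_specific)
```

In the original LAPE formulation, units are typically also filtered by a minimum activation probability before the entropy cut; that step is omitted here for brevity.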
📝 Abstract
Multilingual language models (LMs) organize representations for typologically and orthographically diverse languages into a shared parameter space, yet the nature of this internal organization remains elusive. In this work, we investigate which linguistic properties shape multilingual representations: abstract language identity or surface-form cues. Focusing on compact, distilled models, where representational trade-offs are explicit, we analyze language-associated units in Llama-3.2-1B and Gemma-2-2B using the Language Activation Probability Entropy (LAPE) metric, and further decompose activations with sparse autoencoders. We find that these units are strongly conditioned on orthography: romanization induces near-disjoint representations that align with neither native-script inputs nor English, whereas word-order shuffling has limited effect on unit identity. Probing shows that typological structure becomes increasingly accessible in deeper layers, while causal interventions indicate that generation is most sensitive to units that are invariant to surface-form perturbations rather than to units identified by typological alignment alone. Overall, our results suggest that multilingual LMs organize representations around surface form, with linguistic abstraction emerging gradually without collapsing into a unified interlingua.
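The abstract pairs LAPE with sparse autoencoders to decompose activations into sparse, more interpretable features. Below is a minimal PyTorch sketch of a standard ReLU sparse autoencoder with an L1 sparsity penalty, not the paper's exact setup: the width `d_model=2048`, the 16384-feature dictionary, and the `l1_coeff` value are assumptions for illustration.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Overcomplete autoencoder that reconstructs LM activations
    through a sparse, non-negative feature code."""

    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, x: torch.Tensor):
        codes = torch.relu(self.encoder(x))  # sparse feature activations
        recon = self.decoder(codes)          # reconstructed activation
        return recon, codes

# Illustrative training step on a stand-in batch of activations.
sae = SparseAutoencoder(d_model=2048, d_features=16384)
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
x = torch.randn(64, 2048)  # would be real LM activations in practice
recon, codes = sae(x)
l1_coeff = 1e-3  # sparsity strength; an assumed hyperparameter
loss = ((recon - x) ** 2).mean() + l1_coeff * codes.abs().mean()
opt.zero_grad()
loss.backward()
opt.step()
```

Once trained, the sparse codes can be compared per language; the abstract's orthography finding predicts that features active for native-script and romanized inputs of the same language would be largely disjoint.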