Not All Language Model Features Are One-Dimensionally Linear

📅 2024-05-23
📈 Citations: 40
Influential: 0
🤖 AI Summary
This work challenges the prevailing assumption that language models encode concepts only as one-dimensional ("linear") features, asking whether irreducible multi-dimensional representations exist internally. Method: the authors formally define irreducible multi-dimensional features and propose a scalable discovery method based on sparse autoencoders, complemented by causal intervention experiments, continuity analysis, and a separability evaluation. Contribution/Results: applying this framework to GPT-2, Mistral 7B, and Llama 3 8B, they identify highly interpretable circular multi-dimensional representations, such as those encoding days of the week and months of the year. Intervention evidence shows that these circular structures act as causal computational units for modular-arithmetic tasks over days and months, exhibiting both geometric continuity in representation space and functional necessity. The findings argue that mechanistic interpretability must move beyond one-dimensional feature attribution toward modeling geometric structure.

📝 Abstract
Recent work has proposed that language models perform computation by manipulating one-dimensional representations of concepts ("features") in activation space. In contrast, we explore whether some language model representations may be inherently multi-dimensional. We begin by developing a rigorous definition of irreducible multi-dimensional features based on whether they can be decomposed into either independent or non-co-occurring lower-dimensional features. Motivated by these definitions, we design a scalable method that uses sparse autoencoders to automatically find multi-dimensional features in GPT-2 and Mistral 7B. These auto-discovered features include strikingly interpretable examples, e.g. circular features representing days of the week and months of the year. We identify tasks where these exact circles are used to solve computational problems involving modular arithmetic in days of the week and months of the year. Next, we provide evidence that these circular features are indeed the fundamental unit of computation in these tasks with intervention experiments on Mistral 7B and Llama 3 8B, and we examine the continuity of the days of the week feature in Mistral 7B. Overall, our work argues that understanding multi-dimensional features is necessary to mechanistically decompose some model behaviors.
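The abstract's central example, circular features for days of the week used in modular arithmetic, can be illustrated with a small sketch. This is an illustrative toy, not the paper's actual method: it places the 7 days at equally spaced angles on a unit circle in a 2D subspace (as the discovered features do) and implements "Monday + 3 days" as a rotation, decoding by nearest embedding. All function names here are hypothetical.

```python
import numpy as np

DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

def day_embedding(i: int) -> np.ndarray:
    """2D point for day i, equally spaced on the unit circle."""
    theta = 2 * np.pi * i / 7
    return np.array([np.cos(theta), np.sin(theta)])

def add_days(i: int, k: int) -> int:
    """Advance day i by k days via rotation, then decode by nearest day."""
    theta = 2 * np.pi * k / 7
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    v = rot @ day_embedding(i)
    # Decode: the day whose embedding has the largest dot product with v
    return int(np.argmax([v @ day_embedding(j) for j in range(7)]))

print(DAYS[add_days(0, 3)])  # prints "Thu"
```

The key property, and the reason such a feature is irreducibly two-dimensional, is that modular addition is a rotation of the whole 2D plane: no single 1D direction supports this operation on its own.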
Problem

Research questions and friction points this paper is trying to address.

Do language models use inherently multi-dimensional features, rather than only one-dimensional ones?
How can multi-dimensional features be discovered automatically and at scale?
Is there causal evidence that such features are fundamental units of computation?
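The irreducibility definition above hinges on whether a candidate multi-dimensional feature decomposes into independent lower-dimensional parts. A crude empirical proxy for that test, not the paper's formal criterion, is to estimate the mutual information between a feature's two coordinates: near zero suggests the feature is reducible to independent 1D parts, while a circular feature (where the coordinates are coupled through a shared angle) scores high. Bin counts and sample sizes below are illustrative choices.

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Histogram-based MI estimate between two 1D samples (in nats)."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p_joint = joint / joint.sum()
    p_a = p_joint.sum(axis=1, keepdims=True)   # marginal of a
    p_b = p_joint.sum(axis=0, keepdims=True)   # marginal of b
    mask = p_joint > 0
    return float(np.sum(p_joint[mask] * np.log(p_joint[mask] / (p_a @ p_b)[mask])))

rng = np.random.default_rng(0)
# Independent coordinates -> near-zero MI (feature would be reducible)
x, y = rng.normal(size=5000), rng.normal(size=5000)
# Circular feature: a shared angle couples the coordinates -> high MI (irreducible)
theta = rng.uniform(0, 2 * np.pi, 5000)
u, v = np.cos(theta), np.sin(theta)
print(mutual_information(x, y) < mutual_information(u, v))  # prints True
```

The paper's actual definition also covers non-co-occurring (mixture-like) decompositions, which a plain independence test does not capture.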
Innovation

Methods, ideas, or system contributions that make the work stand out.

Rigorous definition of irreducible multi-dimensional features
Sparse autoencoders used to discover multi-dimensional features automatically
Circular features (days of the week, months of the year) shown to drive modular arithmetic
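The discovery method builds on sparse autoencoders, which learn an overcomplete, sparse dictionary of directions from model activations. A minimal sketch of the basic SAE forward pass and loss is below; the dimensions, initialization, and L1 coefficient are illustrative assumptions, not the training setup used in the paper, which additionally clusters the learned dictionary to surface multi-dimensional structure.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_hidden = 16, 64   # hidden layer is overcomplete (4x here)
W_enc = rng.normal(0, 0.1, (d_hidden, d_model))
W_dec = rng.normal(0, 0.1, (d_model, d_hidden))
b_enc = np.zeros(d_hidden)

def sae_forward(x, l1_coef=1e-3):
    """Encode an activation vector into sparse features and reconstruct it."""
    f = np.maximum(0.0, W_enc @ x + b_enc)   # ReLU -> nonnegative, sparse code
    x_hat = W_dec @ f                        # reconstruction from dictionary
    # Reconstruction error plus L1 sparsity penalty on the feature activations
    loss = np.sum((x - x_hat) ** 2) + l1_coef * np.sum(np.abs(f))
    return f, x_hat, loss
```

After training, each column of `W_dec` is a 1D feature direction; a circular feature shows up as a small group of such directions whose activations trace out a ring rather than varying independently.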