🤖 AI Summary
This work investigates the maximum number of features that can be linearly stored in and linearly decoded from intermediate layers of language models under the linear representation hypothesis (LRH). Framing the problem as a compressed sensing task restricted to linear decoders, the study establishes nearly matching bounds: the required number of neurons is lower-bounded by Ω(k²/log k · log(m/k)) and upper-bounded by O(k² log m), demonstrating that exponentially many features can be stored under the LRH. The analysis leverages tools from random matrix theory, rank bounds for near-identity matrices, and Turán's theorem, and is extended to decoders incorporating activation functions and biases. These results reveal that "linear accessibility" is strictly stronger than "linear representation," providing rigorous theoretical support for the feature superposition hypothesis.
📝 Abstract
We introduce a mathematical framework for the linear representation hypothesis (LRH), which asserts that intermediate layers of language models store features linearly. We separate the hypothesis into two claims: linear representation (features are linearly embedded in neuron activations) and linear accessibility (features can be linearly decoded). We then ask: How many neurons $d$ suffice to both linearly represent and linearly access $m$ features? Classical results in compressed sensing imply that for $k$-sparse inputs, $d = O(k\log (m/k))$ suffices if we allow non-linear decoding algorithms (Candes and Tao, 2006; Candes et al., 2006; Donoho, 2006). However, the additional requirement of linear decoding takes the problem out of the classical compressed sensing setting and into linear compressed sensing. Our main theoretical result establishes nearly-matching upper and lower bounds for linear compressed sensing. We prove that $d = \Omega_\epsilon(\frac{k^2}{\log k}\log (m/k))$ is required, while $d = O_\epsilon(k^2\log m)$ suffices. The lower bound establishes a quantitative gap between the classical and linear compressed sensing settings, illustrating how linear accessibility is a meaningfully stronger hypothesis than linear representation alone. The upper bound confirms that neurons can store an exponential number of features under the LRH, giving theoretical evidence for the "superposition hypothesis" (Elhage et al., 2022). The upper bound proof uses standard random constructions of matrices with approximately orthogonal columns. The lower bound proof uses rank bounds for near-identity matrices (Alon, 2003) together with Turán's theorem (bounding the number of edges in clique-free graphs). We also show how our results do and do not constrain the geometry of feature representations, and we extend our results to allow decoders with an activation function and bias.
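The upper-bound construction described in the abstract (a random matrix with approximately orthogonal columns, read out linearly) can be illustrated with a small numerical sketch. This is not the paper's construction verbatim; the toy sizes `m`, `d`, `k` and the top-$k$ read-off are illustrative assumptions. Each of $m$ features gets a random unit direction in $\mathbb{R}^d$; a $k$-sparse feature vector is stored as a linear combination of its directions, and a linear decoder projects back onto every direction. Because random directions are nearly orthogonal, the interference terms are small and the active features dominate the read-out.

```python
import numpy as np

rng = np.random.default_rng(0)
m, d, k = 1000, 400, 5  # features, neurons, sparsity (toy sizes, d ~ k^2 log m)

# Random embedding with approximately orthogonal columns:
# each feature is assigned a random unit vector in R^d.
A = rng.standard_normal((d, m))
A /= np.linalg.norm(A, axis=0)

# A k-sparse feature vector: k active features with unit strength.
support = rng.choice(m, size=k, replace=False)
x = np.zeros(m)
x[support] = 1.0

h = A @ x          # neuron activations: the linear representation
x_hat = A.T @ h    # linear decoder: read-out along each feature direction

# Near-orthogonality means x_hat ~= x plus O(k/sqrt(d)) interference,
# so the k largest read-outs identify the active features.
recovered = np.sort(np.argsort(np.abs(x_hat))[-k:])
print(np.array_equal(recovered, np.sort(support)))
```

Note that `m` far exceeds `d` here: many more features than neurons are stored in superposition, which is exactly the regime the bounds quantify.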