🤖 AI Summary
The fundamental units of internal representation in large language models (LLMs) remain undefined; individual neurons exhibit semantic polysemy, and conventional feature reconstruction is unreliable and unstable.
Method: We propose the Atoms Theory, the first formal definition of stable, interpretable elementary representation units ("atoms") in LLMs. Leveraging compressed sensing theory, we prove conditions under which atoms satisfy the restricted isometry property (RIP) and ℓ₁-recoverability, and establish theoretical guarantees that single-layer sparse autoencoders (SAEs) with threshold activations can reliably identify atoms. The method augments the SAE with an atomic inner product (AIP) correction to mitigate representation shift, ensuring both stability and uniqueness of the sparse codes.
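The encode/decode structure of such a threshold-activated single-layer SAE can be sketched as follows. This is a minimal NumPy illustration: the random unit-norm dictionary, the dimensions, the threshold value, and the tied encoder weights are all illustrative assumptions, not the paper's trained configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, n_atoms = 64, 256           # activation width and dictionary size (illustrative)
D = rng.normal(size=(d_model, n_atoms))
D /= np.linalg.norm(D, axis=0)       # unit-norm atoms as dictionary columns

def encode(x, W_enc, b_enc, theta):
    """Threshold activation: pre-activations at or below theta are zeroed."""
    pre = W_enc @ x + b_enc
    return np.where(pre > theta, pre, 0.0)

def decode(z, D):
    """Reconstruct the activation as a sparse combination of atoms."""
    return D @ z

# Tied encoder weights as a simple (untrained) choice; theta sets sparsity.
W_enc, b_enc, theta = D.T, np.zeros(n_atoms), 0.5

# A synthetic activation built from 3 atoms -> encode -> reconstruct.
idx = rng.choice(n_atoms, size=3, replace=False)
x = D[:, idx] @ rng.uniform(1.0, 2.0, size=3)
z = encode(x, W_enc, b_enc, theta)   # sparse code over the atom set
x_hat = decode(z, D)                 # approximate reconstruction
```

With tied, untrained weights the reconstruction is only approximate; a trained SAE learns `W_enc`, `b_enc`, and `D` so that the sparse code reproduces the activation faithfully.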
Results: Evaluated on Gemma- and Llama-series models, our approach achieves an average sparse reconstruction accuracy of 99.9% across layers, with over 99.8% of atoms satisfying the uniqueness condition, substantially outperforming neuron-based (0.5%) and traditional feature-based (68.2%) representations.
📝 Abstract
The fundamental units of internal representations in large language models (LLMs) remain undefined, limiting further understanding of their mechanisms. Neurons or features are often regarded as such units, yet neurons suffer from polysemy, while features face concerns of unreliable reconstruction and instability. To address these issues, we propose the Atoms Theory, which defines such units as atoms. We introduce the atomic inner product (AIP) to correct representation shifting, formally define atoms, and prove conditions under which atoms satisfy the Restricted Isometry Property (RIP), ensuring stable sparse representations over the atom set and linking to compressed sensing. Under stronger conditions, we further establish the uniqueness and exact $\ell_1$ recoverability of the sparse representations, and provide guarantees that single-layer sparse autoencoders (SAEs) with threshold activations can reliably identify the atoms. To validate the Atoms Theory, we train threshold-activated SAEs on Gemma2-2B, Gemma2-9B, and Llama3.1-8B, achieving 99.9% sparse reconstruction across layers on average; more than 99.8% of atoms satisfy the uniqueness condition, compared to 0.5% for neurons and 68.2% for features, showing that atoms more faithfully capture the intrinsic representations of LLMs. Scaling experiments further reveal the link between SAE size and recovery capacity. Overall, this work systematically introduces and validates the Atoms Theory of LLMs, providing a theoretical framework for understanding internal representations and a foundation for mechanistic interpretability. Code is available at https://github.com/ChenhuiHu/towards_atoms.
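The exact $\ell_1$ recoverability claim can be illustrated with classic basis pursuit: minimize $\|z\|_1$ subject to $Dz = x$, here solved as a linear program via the standard split $z = u - v$ with $u, v \ge 0$. A random Gaussian dictionary stands in for the learned atoms; all sizes and names are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
d, n, k = 64, 128, 3                  # ambient dim, number of atoms, sparsity
D = rng.normal(size=(d, n))
D /= np.linalg.norm(D, axis=0)        # unit-norm random atoms

# Ground-truth k-sparse code and the activation it generates.
z_true = np.zeros(n)
z_true[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)
x = D @ z_true

# Basis pursuit as an LP: min 1'(u + v)  s.t.  D(u - v) = x,  u, v >= 0.
c = np.ones(2 * n)
A_eq = np.hstack([D, -D])
res = linprog(c, A_eq=A_eq, b_eq=x, bounds=(0, None), method="highs")
z_hat = res.x[:n] - res.x[n:]         # recovered sparse code
```

For a k-sparse code well below the RIP-governed threshold, the $\ell_1$ minimizer coincides with the true code with overwhelming probability, which is the regime the uniqueness guarantees describe.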