Towards Atoms of Large Language Models

📅 2025-09-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
The fundamental units of internal representation in large language models (LLMs) remain undefined; individual neurons exhibit semantic polysemy, and conventional feature reconstruction is unreliable and unstable. Method: We propose the "atom" theory, a formal definition of stable, interpretable elementary representation units in LLMs. Leveraging compressed sensing theory, we prove conditions under which atoms satisfy the restricted isometry property (RIP) and ℓ₁-recoverability, and establish theoretical guarantees that sparse autoencoders with threshold activations can reliably identify atoms. Our method employs a single-layer sparse autoencoder augmented with an atomic inner product (AIP) correction to mitigate representation shifting, ensuring both stability and uniqueness of sparse codes. Results: Evaluated on Gemma and Llama series models, our approach achieves an average sparse reconstruction accuracy of 99.9%, with over 99.8% of atoms satisfying the uniqueness condition, substantially outperforming neuron- and traditional feature-based representations.

📝 Abstract
The fundamental units of internal representations in large language models (LLMs) remain undefined, limiting further understanding of their mechanisms. Neurons or features are often regarded as such units, yet neurons suffer from polysemy, while features face concerns of unreliable reconstruction and instability. To address this issue, we propose the Atoms Theory, which defines such units as atoms. We introduce the atomic inner product (AIP) to correct representation shifting, formally define atoms, and prove the conditions under which atoms satisfy the Restricted Isometry Property (RIP), ensuring stable sparse representations over the atom set and linking to compressed sensing. Under stronger conditions, we further establish the uniqueness and exact $\ell_1$ recoverability of the sparse representations, and provide guarantees that single-layer sparse autoencoders (SAEs) with threshold activations can reliably identify the atoms. To validate the Atoms Theory, we train threshold-activated SAEs on Gemma2-2B, Gemma2-9B, and Llama3.1-8B, achieving 99.9% sparse reconstruction across layers on average, with more than 99.8% of atoms satisfying the uniqueness condition, compared to 0.5% for neurons and 68.2% for features, showing that atoms more faithfully capture the intrinsic representations of LLMs. Scaling experiments further reveal the link between SAE size and recovery capacity. Overall, this work systematically introduces and validates the Atoms Theory of LLMs, providing a theoretical framework for understanding internal representations and a foundation for mechanistic interpretability. Code available at https://github.com/ChenhuiHu/towards_atoms.
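The single-layer SAE with threshold activation described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the dimensions, the random stand-in weights, and the threshold value `theta` are all hypothetical, and the AIP correction is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: d = model hidden size, m = SAE dictionary size (m >> d).
d, m = 16, 64

# Randomly initialized parameters stand in for trained SAE weights.
W_enc = rng.normal(size=(m, d)) / np.sqrt(d)   # encoder weights
W_dec = rng.normal(size=(d, m)) / np.sqrt(m)   # decoder; columns act as candidate atoms
b_enc = np.zeros(m)                            # encoder bias
theta = 0.5                                    # activation threshold (assumed value)

def sae_forward(x):
    """Single-layer SAE with threshold activation: pre-activations at or
    below theta are zeroed, yielding a sparse code over the atom dictionary."""
    pre = W_enc @ x + b_enc
    z = np.where(pre > theta, pre, 0.0)        # threshold activation
    x_hat = W_dec @ z                          # sparse reconstruction
    return z, x_hat

x = rng.normal(size=d)                         # a stand-in residual-stream activation
z, x_hat = sae_forward(x)
print("nonzero codes:", np.count_nonzero(z), "of", m)
```

The thresholding is what enforces sparsity: only atoms whose pre-activation exceeds `theta` contribute to the reconstruction, which is the property the paper's recovery guarantees rely on.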
Problem

Research questions and friction points this paper is trying to address.

Defining fundamental representation units in LLMs beyond neurons and features
Addressing polysemy in neurons and unreliable, unstable reconstruction in features
Establishing theoretical framework for stable sparse representations in LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes Atoms Theory to define fundamental representation units
Introduces atomic inner product to correct representation shifting
Validates theory with threshold-activated sparse autoencoders on LLMs
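The uniqueness and $\ell_1$-recovery claims rest on classical compressed-sensing conditions on the atom dictionary. A minimal sketch, using a random stand-in dictionary (not the learned SAE atoms), computes the mutual coherence and the resulting standard sparsity bound for unique recovery:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical atom dictionary: columns are unit-norm atoms in R^d.
d, m = 16, 64
D = rng.normal(size=(d, m))
D /= np.linalg.norm(D, axis=0)

# Mutual coherence mu(D): largest |inner product| between distinct atoms.
G = np.abs(D.T @ D)
np.fill_diagonal(G, 0.0)
mu = G.max()

# Classical compressed-sensing bound: a representation with fewer than
# (1 + 1/mu) / 2 nonzeros over D is the unique sparsest one and is
# exactly recoverable by l1 minimization.
k_max = (1 + 1 / mu) / 2
print(f"coherence mu = {mu:.3f}, unique recovery for sparsity < {k_max:.2f}")
```

Lower coherence among atoms permits denser codes to be recovered uniquely; the paper's RIP-based conditions play the analogous role for its learned atom sets.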
Chenhui Hu
The Key Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
Pengfei Cao
The Key Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
Yubo Chen
Institute of Automation, Chinese Academy of Sciences
Natural Language Processing · Information Extraction · Event Extraction · Large Language Model
Kang Liu
The Key Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
Jun Zhao
The Key Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China