From Directions to Regions: Decomposing Activations in Language Models via Local Geometry

📅 2026-02-02
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Traditional activation decomposition methods rely on global linear directions, which struggle to capture the complex nonlinear or multidimensional conceptual structures in language models. This work proposes modeling the activation space as a mixture of Gaussian regions, leveraging Mixture of Factor Analyzers (MFA) to capture local covariance structures. By representing concepts through region centers and local subspaces rather than a single global direction, the approach introduces local geometric modeling into activation decomposition for the first time. The method is fully unsupervised and scalable. Experiments on Llama-3.1-8B and Gemma-2-2B demonstrate that MFA significantly outperforms unsupervised baselines in concept localization and intervention tasks, matching or even surpassing the effectiveness of supervised methods and sparse autoencoders.

📝 Abstract
Activation decomposition methods in language models are tightly coupled to geometric assumptions on how concepts are realized in activation space. Existing approaches search for individual global directions, implicitly assuming linear separability, which overlooks concepts with nonlinear or multi-dimensional structure. In this work, we leverage Mixture of Factor Analyzers (MFA) as a scalable, unsupervised alternative that models the activation space as a collection of Gaussian regions with their local covariance structure. MFA decomposes activations into two compositional geometric objects: the region's centroid in activation space, and the local variation from the centroid. We train large-scale MFAs for Llama-3.1-8B and Gemma-2-2B, and show they capture complex, nonlinear structures in activation space. Moreover, evaluations on localization and steering benchmarks show that MFA outperforms unsupervised baselines, is competitive with supervised localization methods, and often achieves stronger steering performance than sparse autoencoders. Together, our findings position local geometry, expressed through subspaces, as a promising unit of analysis for scalable concept discovery and model control, accounting for complex structures that isolated directions fail to capture.
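The decomposition the abstract describes, an activation split into a region's centroid plus its local-subspace variation, can be sketched in a few lines. This is an illustrative toy, not the paper's fitted model: the dimensions are tiny, the parameters are random stand-ins for a trained MFA, and region assignment uses hard nearest-centroid distance rather than Gaussian responsibilities.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k_regions, r = 16, 4, 3  # activation dim, number of regions, local subspace rank (toy sizes)

# Stand-in MFA parameters: per-region centroid mu_k and local factor loadings Lambda_k.
mus = rng.normal(size=(k_regions, d))
lambdas = rng.normal(size=(k_regions, d, r))

def decompose(x):
    """Assign x to the nearest region centroid, then split it into the
    centroid, the component explained by that region's local subspace,
    and a leftover residual. Hard assignment is a simplification of the
    soft Gaussian responsibilities an MFA would use."""
    k = int(np.argmin(np.linalg.norm(mus - x, axis=1)))
    residual = x - mus[k]
    z = np.linalg.pinv(lambdas[k]) @ residual   # local factor coordinates
    local = lambdas[k] @ z                      # variation captured by the local subspace
    return k, mus[k], local, residual - local

x = rng.normal(size=d)
k, center, local, noise = decompose(x)
assert np.allclose(center + local + noise, x)   # the three parts recompose the activation
```

The point of the sketch is the compositional readout: a concept is located by which region fires (the centroid) and how the activation moves within that region (the local subspace coordinates `z`), rather than by projection onto a single global direction.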
Problem

Research questions and friction points this paper is trying to address.

activation decomposition
nonlinear structure
language models
geometric assumptions
concept representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixture of Factor Analyzers
local geometry
activation decomposition
nonlinear concept representation
model steering
Or Shafran
Blavatnik School of Computer Science and AI, Tel Aviv University, Israel
Shaked Ronen
Blavatnik School of Computer Science and AI, Tel Aviv University, Israel
Omri Fahn
Blavatnik School of Computer Science and AI, Tel Aviv University, Israel
Shauli Ravfogel
Faculty Fellow, NYU
NLP, Machine Learning
Atticus Geiger
Pr(Ai)²R Group
Artificial Intelligence, Natural Language, Mechanistic Interpretability, Causality
Mor Geva
Tel Aviv University, Google Research
Natural Language Processing