🤖 AI Summary
Existing sparse autoencoders (SAEs) used for mechanistic interpretability of large language models (LLMs) often perform poorly under causal evaluation and offer limited intrinsic interpretability, hindering faithful attribution of MLP-layer activations to human-understandable concepts.
Method: This work applies semi-nonnegative matrix factorization (SNMF) to decompose MLP activations without supervision, yielding features that are both sparse linear combinations of co-activated neurons and directly interpretable through the inputs that activate them.
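As a rough illustration of the decomposition step, below is a minimal semi-NMF sketch in the style of Ding et al.'s alternating updates, assuming an activation matrix with MLP neurons as rows and tokens as columns; the variable names, initialization, and iteration count are illustrative assumptions rather than the paper's exact training procedure.

```python
# Minimal semi-NMF sketch (Ding-et-al.-style alternating/multiplicative updates).
# Assumes A holds MLP activations with shape (n_neurons, n_tokens);
# names and hyperparameters are illustrative, not the paper's exact recipe.
import numpy as np

def semi_nmf(A, k, n_iter=200, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    n_neurons, n_tokens = A.shape
    F = rng.standard_normal((n_neurons, k))         # mixed-sign neuron loadings per feature
    G = np.abs(rng.standard_normal((n_tokens, k)))  # nonnegative per-token coefficients

    pos = lambda M: (np.abs(M) + M) / 2.0  # elementwise positive part
    neg = lambda M: (np.abs(M) - M) / 2.0  # elementwise magnitude of the negative part

    for _ in range(n_iter):
        # Unconstrained factor: least-squares solve F = A G (G^T G)^{-1}
        F = A @ G @ np.linalg.pinv(G.T @ G)
        # Nonnegative factor: multiplicative update that keeps G >= 0
        AtF = A.T @ F
        FtF = F.T @ F
        G *= np.sqrt((pos(AtF) + G @ neg(FtF)) / (neg(AtF) + G @ pos(FtF) + eps))

    return F, G  # A ≈ F @ G.T
```

Under this reading, each column of F is a mixed-sign combination of neurons (a candidate feature), and the corresponding nonnegative column of G records how strongly each input token expresses that feature.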
Contribution/Results: In causal steering experiments on Llama 3.1, Gemma 2, and GPT-2, the SNMF-derived features outperform sparse autoencoders and a strong supervised difference-in-means baseline while aligning with human-interpretable concepts. Further analysis shows that specific neuron combinations are reused across semantically related features, exposing a hierarchical structure in the MLP activation space and bridging causal fidelity with human interpretability without supervision.
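To make the steering comparison concrete, here is a hedged sketch of one common intervention setup: add a scaled feature direction to the post-nonlinearity MLP activations of a single GPT-2 layer through a forward hook during generation. The layer index, the scale alpha, the `mlp.act` module path in the Hugging Face GPT-2 implementation, and the random placeholder direction are all assumptions for illustration, not the paper's exact protocol.

```python
# Hedged sketch: steer GPT-2 by adding a scaled feature direction to one layer's
# MLP activations (post-nonlinearity) via a forward hook. Layer, scale, and the
# placeholder direction are illustrative; a real run would use an SNMF feature.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
tok = GPT2Tokenizer.from_pretrained("gpt2")

layer, alpha = 6, 4.0
# An SNMF feature would be a sparse, mixed-sign combination of the 3072 MLP
# neurons at this layer (e.g., a column of F); a random unit vector stands in here.
feature = torch.randn(4 * model.config.n_embd)
feature = feature / feature.norm()

def steer(module, inputs, output):
    # Add the scaled feature direction at every token position.
    return output + alpha * feature.to(output.dtype)

handle = model.transformer.h[layer].mlp.act.register_forward_hook(steer)
ids = tok("The movie was", return_tensors="pt").input_ids
with torch.no_grad():
    out = model.generate(ids, max_new_tokens=20, do_sample=False,
                         pad_token_id=tok.eos_token_id)
print(tok.decode(out[0]))
handle.remove()
```

A causal evaluation of this kind compares the steered generations against unsteered ones to check whether outputs shift toward the feature's concept.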
📝 Abstract
A central goal for mechanistic interpretability has been to identify the right units of analysis in large language models (LLMs) that causally explain their outputs. While early work focused on individual neurons, evidence that neurons often encode multiple concepts has motivated a shift toward analyzing directions in activation space. A key question is how to find directions that capture interpretable features in an unsupervised manner. Current methods rely on dictionary learning with sparse autoencoders (SAEs), commonly trained over residual stream activations to learn directions from scratch. However, SAEs often struggle in causal evaluations and lack intrinsic interpretability, as their learning is not explicitly tied to the computations of the model. Here, we tackle these limitations by directly decomposing MLP activations with semi-nonnegative matrix factorization (SNMF), such that the learned features are (a) sparse linear combinations of co-activated neurons, and (b) mapped to their activating inputs, making them directly interpretable. Experiments on Llama 3.1, Gemma 2, and GPT-2 show that SNMF-derived features outperform SAEs and a strong supervised baseline (difference-in-means) on causal steering, while aligning with human-interpretable concepts. Further analysis reveals that specific neuron combinations are reused across semantically related features, exposing a hierarchical structure in the MLP's activation space. Together, these results position SNMF as a simple and effective tool for identifying interpretable features and dissecting concept representations in LLMs.
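Because each feature is tied back to its activating inputs, a simple readout (an assumed convention, not necessarily the paper's exact procedure) is to list the neurons with the largest loadings and the tokens with the highest nonnegative coefficients for a given feature, reusing the F and G factors from the sketch above.

```python
# Hedged sketch: interpret SNMF feature j from the factors produced above.
# F has shape (n_neurons, k); G has shape (n_tokens, k); `tokens` is the list of
# input tokens whose activations formed the columns of A. All names are illustrative.
import numpy as np

def describe_feature(F, G, tokens, j, top=10):
    """Return the neurons and input tokens most associated with feature j."""
    top_neurons = np.argsort(-np.abs(F[:, j]))[:top]              # strongest neuron loadings
    top_tokens = [tokens[i] for i in np.argsort(-G[:, j])[:top]]  # largest nonneg. coefficients
    return top_neurons.tolist(), top_tokens
```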