Beyond Activation Patterns: A Weight-Based Out-of-Context Explanation of Sparse Autoencoder Features

📅 2026-01-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing approaches to interpreting sparse autoencoder features rely primarily on activation patterns, overlooking the functional roles those features play in the forward pass. This work proposes a weight-interaction-based interpretability framework that, for the first time, reveals the functional semantics of features without requiring any activation data. Through functional-effect metrics and structural probing experiments on Gemma-2 and Llama-3.1, the study finds that roughly one quarter of features directly predict output tokens, that features participate in attention mechanisms with depth-dependent structure, and that semantic and non-semantic features are distributed distinctly across attention circuits. The approach offers a new out-of-context perspective on the internal mechanisms of large language models.
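
The token-prediction finding lends itself to a compact illustration. Below is a minimal sketch, using toy tensor shapes and random stand-in weights (an assumption for illustration, not the paper's code or exact metric), of reading a feature's direct output-token effect purely from weights: project the SAE decoder direction through the unembedding matrix and inspect the tokens it promotes.

```python
import torch

# Toy sizes standing in for a real model/SAE (assumed, not Gemma-2's true dims)
d_model, d_vocab, n_features = 512, 32_000, 4_096
W_dec = torch.randn(n_features, d_model)  # SAE decoder: one residual-stream direction per feature
W_U = torch.randn(d_model, d_vocab)       # unembedding: residual stream -> token logits

def direct_logit_effect(feature_id: int, k: int = 10) -> torch.Tensor:
    """Top-k tokens a feature promotes through the unembedding (weights only)."""
    direction = W_dec[feature_id]          # the feature's write direction
    logits = direction @ W_U               # direct weight interaction, no activations
    return torch.topk(logits, k).indices   # ids of the most-promoted tokens

print(direct_logit_effect(feature_id=1234))  # hypothetical feature index
```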

📝 Abstract
Sparse autoencoders (SAEs) have emerged as a powerful technique for decomposing language model representations into interpretable features. Current interpretation methods infer feature semantics from activation patterns, but overlook that features are trained to reconstruct activations that serve computational roles in the forward pass. We introduce a novel weight-based interpretation framework that measures functional effects through direct weight interactions, requiring no activation data. Through three experiments on Gemma-2 and Llama-3.1 models, we demonstrate that (1) one quarter of features directly predict output tokens, (2) features actively participate in attention mechanisms with depth-dependent structure, and (3) semantic and non-semantic feature populations exhibit distinct distribution profiles in attention circuits. Our analysis provides the missing out-of-context half of SAE feature interpretability.
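
The abstract's second and third findings concern attention circuits. The sketch below, again with toy shapes and random stand-in weights, shows one way decoder directions can be scored against a head's QK and OV circuits without activations; the specific scores (a bilinear QK term and an OV norm ratio) are illustrative assumptions, not the paper's exact metrics.

```python
import torch

# Toy sizes and random weights standing in for one attention head (assumptions)
d_model, d_head, n_features = 512, 64, 4_096
W_dec = torch.randn(n_features, d_model)   # SAE decoder directions
W_Q = torch.randn(d_model, d_head)         # query projection of one head
W_K = torch.randn(d_model, d_head)         # key projection of one head
W_V = torch.randn(d_model, d_head)         # value projection of one head
W_O = torch.randn(d_head, d_model)         # output projection of one head

def qk_interaction(src: int, dst: int) -> torch.Tensor:
    """Bilinear QK score between two feature directions (weights only)."""
    return (W_dec[dst] @ W_Q) @ (W_dec[src] @ W_K)

def ov_throughput(feature_id: int) -> torch.Tensor:
    """Fraction of a feature direction's norm the OV circuit carries forward."""
    moved = (W_dec[feature_id] @ W_V) @ W_O   # direction after the OV circuit
    return moved.norm() / W_dec[feature_id].norm()

print(qk_interaction(src=7, dst=42).item(), ov_throughput(feature_id=42).item())
```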
Problem

Research questions and friction points this paper is trying to address.

sparse autoencoder
feature interpretability
weight-based explanation
out-of-context
language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

weight-based interpretation
sparse autoencoder
out-of-context explanation
feature functionality
attention mechanism