Dense SAE Latents Are Features, Not Bugs

📅 2025-06-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates whether frequently activating ("dense") latents in sparse autoencoders (SAEs) are training artifacts or genuine functional representations. Using geometric analysis of the residual stream, latent ablation and subspace-suppression experiments, layer-wise attribution, and semantic annotation, the authors show that dense latents are not noise: they form complementary antipodal pairs that reconstruct specific residual-stream directions and carry clear functional semantics. The paper proposes six interpretable functional categories and traces their evolution across layers, from low-level structural representations in early layers, through semantic abstraction in middle layers, to output-directed signals in the final layers. Crucially, ablating the subspace spanned by dense latents suppresses the emergence of new dense features in retrained SAEs, indicating that high-density directions are intrinsic to the residual space. The study concludes that dense latents are inherent, functional components of language model computation rather than artifacts to be eliminated.
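The summary above refers to activation frequency, antipodal decoder pairs, and subspace suppression. As a rough illustration (not the authors' code), the NumPy sketch below shows one way to flag dense latents by firing frequency and to search for antipodal pairs via the cosine similarity of their decoder directions; all array shapes, the synthetic data, and the 10% density and -0.9 antipodality thresholds are assumptions made for illustration.

```python
# Hypothetical sketch: flag "dense" SAE latents and find antipodal decoder pairs.
import numpy as np

rng = np.random.default_rng(0)
n_tokens, n_latents, d_model = 4096, 512, 128

# Stand-in data: per-latent firing biases spread so activation densities vary.
bias = rng.uniform(-3.0, 0.5, size=n_latents)
acts = np.maximum(rng.normal(bias, 1.0, size=(n_tokens, n_latents)), 0.0)
decoder = rng.normal(size=(n_latents, d_model))  # SAE decoder rows (latent -> residual)

# Density = fraction of tokens on which each latent fires.
density = (acts > 0).mean(axis=0)
dense_idx = np.where(density > 0.10)[0]  # assumed "dense" threshold

# Antipodal pairs: dense latents whose decoder directions are nearly opposite.
unit_dec = decoder / np.linalg.norm(decoder, axis=1, keepdims=True)
cos = unit_dec[dense_idx] @ unit_dec[dense_idx].T
pairs = [
    (dense_idx[i], dense_idx[j])
    for i in range(len(dense_idx))
    for j in range(i + 1, len(dense_idx))
    if cos[i, j] < -0.9  # assumed antipodality cutoff
]
print(f"{len(dense_idx)} dense latents, {len(pairs)} antipodal pairs")
```

On real SAE weights and activations, the same procedure would surface the dense latents the paper studies; with the random stand-ins here it only demonstrates the mechanics.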

📝 Abstract
Sparse autoencoders (SAEs) are designed to extract interpretable features from language models by enforcing a sparsity constraint. Ideally, training an SAE would yield latents that are both sparse and semantically meaningful. However, many SAE latents activate frequently (i.e., are "dense"), raising concerns that they may be undesirable artifacts of the training procedure. In this work, we systematically investigate the geometry, function, and origin of dense latents and show that they are not only persistent but often reflect meaningful model representations. We first demonstrate that dense latents tend to form antipodal pairs that reconstruct specific directions in the residual stream, and that ablating their subspace suppresses the emergence of new dense features in retrained SAEs -- suggesting that high-density features are an intrinsic property of the residual space. We then introduce a taxonomy of dense latents, identifying classes tied to position tracking, context binding, entropy regulation, letter-specific output signals, part-of-speech, and principal component reconstruction. Finally, we analyze how these features evolve across layers, revealing a shift from structural features in early layers, to semantic features in mid layers, and finally to output-oriented signals in the last layers of the model. Our findings indicate that dense latents serve functional roles in language model computation and should not be dismissed as training noise.
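To make the subspace-ablation experiment described in the abstract concrete, here is a minimal, hypothetical sketch of projecting residual-stream activations off the subspace spanned by dense-latent decoder directions before retraining an SAE. The shapes, stand-in data, and QR-based projection are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: ablate the dense-latent decoder subspace from the residual stream.
import numpy as np

def ablate_subspace(resid: np.ndarray, directions: np.ndarray) -> np.ndarray:
    """Project residual activations onto the orthogonal complement of
    the span of `directions` (rows are d_model-dimensional decoder vectors)."""
    q, _ = np.linalg.qr(directions.T)        # orthonormal basis, shape (d_model, k)
    return resid - (resid @ q) @ q.T         # remove the component inside the subspace

rng = np.random.default_rng(0)
resid = rng.normal(size=(4096, 128))         # stand-in residual-stream activations
dense_dirs = rng.normal(size=(6, 128))       # stand-in dense-latent decoder directions
resid_ablated = ablate_subspace(resid, dense_dirs)

# Sanity check: the ablated activations are orthogonal to every dense direction.
print(np.abs(resid_ablated @ dense_dirs.T).max())  # ~0 up to floating-point error
```

An SAE retrained on `resid_ablated` rather than `resid` would then be inspected for whether new dense latents emerge, which is the comparison the abstract describes.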
Problem

Research questions and friction points this paper is trying to address.

Investigates whether dense latents in sparse autoencoders serve a functional purpose
Analyzes the role of dense latents in language model computation
Classifies the types of dense latents and traces how they evolve across layers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Shows that dense latents form antipodal pairs reconstructing specific residual-stream directions
Introduces a taxonomy identifying functional classes of dense latents
Traces how dense latents evolve from structural to semantic to output-oriented features across layers
🔎 Similar Papers
No similar papers found.