Sparse but Wrong: Incorrect L0 Leads to Incorrect Features in Sparse Autoencoders

📅 2025-08-22
🤖 AI Summary
This work shows that the L0 hyperparameter—the average number of features that activate per token—in sparse autoencoders (SAEs) is not freely tunable but must match the intrinsic sparsity of the underlying language model's features. A mismatched L0 induces feature entanglement: when L0 is too low, the SAE mixes correlated features to reduce reconstruction loss; when L0 is too high, it finds degenerate solutions that also mix features. To address this, the authors propose an L0 calibration method grounded in the reconstruction error–sparsity trade-off and validated with sparse probing. On synthetic toy models the method recovers the ground-truth L0, and on large language models (LLMs) it coincides with peak sparse-probing accuracy, revealing that most commonly used SAEs set L0 too low. These findings establish correct L0 tuning as a prerequisite for learning semantically coherent, disentangled features.

📝 Abstract
Sparse Autoencoders (SAEs) extract features from LLM internal activations, meant to correspond to single concepts. A core SAE training hyperparameter is L0: how many features should fire per token on average. Existing work compares SAE algorithms using sparsity–reconstruction tradeoff plots, implying L0 is a free parameter with no single correct value. In this work we study the effect of L0 on BatchTopK SAEs, and show that if L0 is not set precisely, the SAE fails to learn the underlying features of the LLM. If L0 is too low, the SAE will mix correlated features to improve reconstruction. If L0 is too high, the SAE finds degenerate solutions that also mix features. Further, we demonstrate a method to determine the correct L0 value for an SAE on a given training distribution, which finds the true L0 in toy models and coincides with peak sparse probing performance in LLMs. We find that most commonly used SAEs have an L0 that is too low. Our work shows that, to train SAEs with correct features, practitioners must set L0 correctly.
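The BatchTopK mechanism named in the abstract can be sketched roughly as follows. This is a minimal NumPy illustration under assumed names and shapes — `batchtopk_sae_forward`, its weight matrices, and the tie handling are hypothetical, not the paper's implementation. The idea: pool the encoder pre-activations across the whole batch and keep only the `batch_size × L0` largest, so each token fires L0 features on average rather than exactly.

```python
import numpy as np

def batchtopk_sae_forward(x, W_enc, b_enc, W_dec, b_dec, l0):
    """Hypothetical sketch of a BatchTopK SAE forward pass.

    x: (batch, d_model) LLM activations; l0: target average number of
    active features per token. Keeps the batch * l0 largest ReLU
    pre-activations across the whole batch, so sparsity is l0 on
    average per token rather than fixed per token.
    """
    pre = np.maximum(x @ W_enc + b_enc, 0.0)      # (batch, d_sae) pre-activations
    k = int(l0 * x.shape[0])                      # total activations to keep
    flat = pre.ravel()
    if k < flat.size:
        threshold = np.partition(flat, -k)[-k]    # k-th largest value in the batch
        acts = np.where(pre >= threshold, pre, 0.0)
    else:
        acts = pre
    recon = acts @ W_dec + b_dec                  # (batch, d_model) reconstruction
    return acts, recon
```

Per-token sparsity then varies around L0, which is why L0 is usually reported as an average.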
Problem

Research questions and friction points this paper is trying to address.

Studying how incorrect L0 hyperparameter affects feature learning in sparse autoencoders
Demonstrating that improper L0 leads to mixed or degenerate feature representations
Developing a method to determine correct L0 value for optimal feature extraction
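The selection step described above can be outlined at a high level. This is a hypothetical sketch, not the paper's code: `train_sae` and `probe_accuracy` are assumed callables standing in for the actual SAE training loop and the sparse-probing evaluation. The shape of the method is a sweep: train one SAE per candidate L0 and keep the value where probing performance peaks.

```python
def calibrate_l0(train_sae, probe_accuracy, l0_candidates):
    """Hypothetical sketch: choose the L0 whose trained SAE maximizes
    downstream sparse-probing accuracy."""
    scores = {}
    for l0 in l0_candidates:
        sae = train_sae(l0)               # train an SAE at this sparsity level
        scores[l0] = probe_accuracy(sae)  # evaluate it with sparse probing
    best_l0 = max(scores, key=scores.get)
    return best_l0, scores
```

In a real run `train_sae` is the expensive step; the abstract's finding that most commonly used SAEs sit below the peak suggests the sweep is worth the cost.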
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces a method for setting the L0 hyperparameter correctly
Shows that a mis-set L0 (too low or too high) causes feature mixing
Validates the chosen L0 against sparse-probing performance
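Sparse probing, the validation signal listed above, can be illustrated with a minimal 1-sparse probe. This is a hypothetical NumPy sketch — the feature-scoring rule and midpoint threshold are simplifications, not the paper's exact probe: pick the single SAE feature that best separates a binary concept and report its classification accuracy.

```python
import numpy as np

def one_sparse_probe(acts, labels):
    """Hypothetical 1-sparse probe: score each SAE feature by how far apart
    its class-conditional mean activations are, keep the best feature, and
    classify by thresholding it at the midpoint of the two means."""
    pos = acts[labels == 1].mean(axis=0)
    neg = acts[labels == 0].mean(axis=0)
    scores = np.abs(pos - neg)                # per-feature class separation
    best = int(np.argmax(scores))
    thresh = (pos[best] + neg[best]) / 2
    preds = (acts[:, best] > thresh).astype(int)
    if pos[best] < neg[best]:                 # flip if the feature fires on class 0
        preds = 1 - preds
    acc = float((preds == labels).mean())
    return best, acc
```

High 1-sparse probe accuracy indicates that single SAE features line up with single concepts, which is the disentanglement a well-calibrated L0 is meant to deliver.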