🤖 AI Summary
Sparse autoencoders (SAEs) suffer from limited encoding accuracy on complex tasks. Using compressed sensing theory, this work is the first to establish an inherent, unbridgeable "amortization gap" in SAE encoders, revealing a fundamental limitation of joint linear-nonlinear encoding for sparse feature inference.
Method: We propose a decoupled encoding–decoding framework that separates sparse inference from encoder learning. Specifically, we design a highly expressive learnable encoder and integrate a theoretically optimal sparse decoding algorithm with provable recovery guarantees.
Results: On multiple benchmarks, our approach improves sparse-code accuracy by 15–30% with near-identical computational overhead. In large language model (LLM) activation analysis, it significantly enhances both feature interpretability and localization precision. This work establishes a new paradigm bridging theoretical foundations and practical performance for SAEs.
📝 Abstract
A recent line of work has shown promise in using sparse autoencoders (SAEs) to uncover interpretable features in neural network representations. However, the simple linear-nonlinear encoding mechanism in SAEs limits their ability to perform accurate sparse inference. Using compressed sensing theory, we prove that an SAE encoder is inherently insufficient for accurate sparse inference, even in solvable cases. We then decouple the encoding and decoding processes to empirically explore conditions under which more sophisticated sparse inference methods outperform traditional SAE encoders. Our results reveal substantial gains in correctly inferring sparse codes, at minimal additional compute. We demonstrate that this generalises to SAEs applied to large language models, where more expressive encoders achieve greater interpretability. This work opens new avenues for understanding neural network representations and analysing large language model activations.
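To make the contrast concrete, here is a minimal sketch (not the paper's exact method; all names and parameters are illustrative) of the gap between a one-shot linear-nonlinear SAE encoder and an iterative sparse inference procedure (ISTA) that reuses the decoder dictionary:

```python
# Illustrative comparison: a one-shot SAE-style encoder ReLU(W x + b) versus
# iterative shrinkage-thresholding (ISTA), which solves a LASSO problem with
# the same dictionary. Dimensions and hyperparameters are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
d, m, k = 16, 64, 3  # input dim, dictionary size, sparsity of the true code

# Random dictionary with roughly unit-norm columns, and a k-sparse code
D = rng.normal(size=(d, m)) / np.sqrt(d)
z_true = np.zeros(m)
z_true[rng.choice(m, k, replace=False)] = rng.uniform(1.0, 2.0, k)
x = D @ z_true

def sae_encode(x, W, b):
    """One-shot linear-nonlinear SAE encoder: ReLU(W x + b)."""
    return np.maximum(W @ x + b, 0.0)

def ista(x, D, lam=0.1, steps=500):
    """ISTA: minimises 0.5 * ||x - D z||^2 + lam * ||z||_1 iteratively."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(steps):
        z = z - (D.T @ (D @ z - x)) / L    # gradient step on the data term
        z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return z

# A tied-weights encoder (W = D^T, zero bias) versus ISTA on the same input
z_sae = sae_encode(x, D.T, np.zeros(m))
z_ista = ista(x, D)
print("SAE reconstruction error: ", np.linalg.norm(D @ z_sae - x))
print("ISTA reconstruction error:", np.linalg.norm(D @ z_ista - x))
```

The one-shot encoder picks up crosstalk from correlated dictionary columns, while the iterative decoder-aware inference drives the reconstruction error much lower on the same dictionary, which is the kind of encoder/inference gap the abstract describes.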