🤖 AI Summary
Existing sparse dictionary learning (SDL) methods, including sparse autoencoders, transcoders, and crosscoders, lack a unified theoretical foundation; only tied-weight sparse autoencoders have received preliminary analysis. Method: We propose the first optimization framework that unifies multiple SDL architectures, mathematically characterizing the coupling between loss landscapes and sparsity constraints. Through formal modeling and controlled experiments, we systematically analyze key phenomena including feature absorption, dead neurons, and neuron resampling. Contribution/Results: The framework reveals how sparsity disentangles superposed representations into interpretable concepts, providing the first common theoretical grounding for diverse SDL approaches. Empirical validation shows strong agreement between theoretical predictions and observed behavior, substantially narrowing a longstanding theoretical gap in SDL.
📝 Abstract
As AI models achieve remarkable capabilities across diverse domains, understanding what representations they learn and how they process information has become increasingly important for both scientific progress and trustworthy deployment. Recent work in mechanistic interpretability has shown that neural networks represent meaningful concepts as directions in their representation spaces and often encode many concepts in superposition. Various sparse dictionary learning (SDL) methods, including sparse autoencoders, transcoders, and crosscoders, address this by training auxiliary models with sparsity constraints to disentangle these superposed concepts into interpretable features. These methods have demonstrated remarkable empirical success but remain poorly understood theoretically. Existing theoretical work is limited to sparse autoencoders with tied-weight constraints, leaving the broader family of SDL methods without formal grounding. In this work, we develop the first unified theoretical framework that casts SDL as a single optimization problem. We demonstrate how diverse methods instantiate this framework and provide a rigorous analysis of the optimization landscape. We give the first theoretical explanations for several empirically observed phenomena, including feature absorption, dead neurons, and the neuron resampling technique. We further design controlled experiments to validate our theoretical results.
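The SDL setup the abstract describes can be sketched as a minimal sparse autoencoder objective: reconstruct model activations through an overcomplete dictionary while penalizing feature activity. This is an illustrative sketch with assumed names, shapes, and a hypothetical penalty weight `lam`, not the paper's notation or any particular method's exact loss:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_dict, n = 8, 32, 16        # activation dim, dictionary size, batch size (assumed)

x = rng.normal(size=(n, d_model))     # activations to be decomposed
W_enc = 0.1 * rng.normal(size=(d_model, d_dict))
b_enc = np.zeros(d_dict)
W_dec = 0.1 * rng.normal(size=(d_dict, d_model))

def relu(z):
    return np.maximum(z, 0.0)

# Encode into a sparse, overcomplete feature code, then reconstruct.
f = relu(x @ W_enc + b_enc)           # feature activations (sparse after training)
x_hat = f @ W_dec                     # reconstruction of the input activations

# SDL-style objective: reconstruction error plus a sparsity penalty.
lam = 1e-3                            # hypothetical sparsity weight
recon_loss = np.mean((x - x_hat) ** 2)
sparsity_loss = lam * np.mean(np.abs(f))
loss = recon_loss + sparsity_loss
```

Transcoders and crosscoders fit the same template by changing the reconstruction target (e.g. a later layer's activations, or activations from several models) rather than the objective's form, which is the sense in which one optimization problem can cover the family.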