🤖 AI Summary
This work investigates the poor out-of-distribution compositional generalization of sparse autoencoders (SAEs) and challenges the prevailing view that attributes this failure to approximation errors from amortized inference. Through controlled experiments, the authors demonstrate that the core issue is the inadequacy of the learned dictionary: even with exact per-sample sparse inference methods such as FISTA, performance degrades severely if the dictionary atoms deviate from the true underlying concepts. The study is the first to clearly isolate dictionary learning -- not amortization error -- as the primary cause of SAEs' generalization failures. Moreover, it shows that high-quality dictionaries, such as oracle dictionaries aligned with ground-truth factors, enable perfect compositional generalization at all tested scales. These findings position scalable, high-fidelity dictionary learning as the central challenge for achieving robust compositional generalization with SAEs.
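The summary's central claim can be illustrated with a small numpy sketch (the dimensions, sparsity level, and perturbation scale below are illustrative assumptions, not taken from the paper): exact per-sample sparse inference, here plain ISTA, recovers the sparse code when given the true dictionary, but the same exact inference degrades once the dictionary atoms point in slightly wrong directions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, s = 30, 10, 2                      # activation dim, # concepts, code sparsity

# Ground-truth dictionary: unit-norm concept directions in activation space
D_true = rng.normal(size=(d, k))
D_true /= np.linalg.norm(D_true, axis=0)

def ista(x, D, lam=0.01, n_iters=500):
    """Per-sample sparse inference: argmin_z 0.5*||x - Dz||^2 + lam*||z||_1."""
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the smooth term
    z = np.zeros(D.shape[1])
    for _ in range(n_iters):
        z = z - D.T @ (D @ z - x) / L                           # gradient step
        z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-threshold
    return z

# A sparse ground-truth code and its projection into activation space
z_true = np.zeros(k)
z_true[rng.choice(k, size=s, replace=False)] = 1.0
x = D_true @ z_true

# Same exact inference, but on a dictionary whose atoms deviate from the truth
noise = rng.normal(size=(d, k))
noise /= np.linalg.norm(noise, axis=0)
D_bad = D_true + 0.3 * noise
D_bad /= np.linalg.norm(D_bad, axis=0)

err_oracle = np.linalg.norm(ista(x, D_true) - z_true)  # near-perfect recovery
err_bad = np.linalg.norm(ista(x, D_bad) - z_true)      # worse, despite exact inference
```

Here the inference procedure is held fixed and only the dictionary changes, mirroring the paper's finding that dictionary quality, not the inference method, is the binding constraint.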
📝 Abstract
The linear representation hypothesis states that neural network activations encode high-level concepts as linear mixtures. However, under superposition, this encoding is a projection from a higher-dimensional concept space into a lower-dimensional activation space, and a linear decision boundary in the concept space need not remain linear after projection. In this setting, classical sparse coding methods with per-sample iterative inference leverage compressed sensing guarantees to recover latent factors. Sparse autoencoders (SAEs), on the other hand, amortise sparse inference into a fixed encoder, introducing a systematic gap. We show this amortisation gap persists across training set sizes, latent dimensions, and sparsity levels, causing SAEs to fail under out-of-distribution (OOD) compositional shifts. Through controlled experiments that decompose the failure, we identify dictionary learning -- not the inference procedure -- as the binding constraint: SAE-learned dictionaries point in substantially wrong directions, and replacing the encoder with per-sample FISTA on the same dictionary does not close the gap. An oracle baseline proves the problem is solvable with a good dictionary at all scales tested. Our results reframe the SAE failure as a dictionary learning challenge, not an amortisation problem, and point to scalable dictionary learning as the key open problem for sparse inference under superposition.
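The two inference modes the abstract contrasts can be sketched in a toy numpy setup (a hypothetical illustration, not the paper's experiments; the tied-weights ReLU pass below is a stand-in for a basic SAE encoder, and all dimensions are assumptions): amortised one-step encoding versus per-sample FISTA on the same dictionary.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, s = 30, 10, 2                      # activation dim, # concepts, code sparsity
D = rng.normal(size=(d, k))
D /= np.linalg.norm(D, axis=0)           # unit-norm dictionary atoms

def fista(x, D, lam=0.01, n_iters=300):
    """Accelerated per-sample inference for min_z 0.5*||x - Dz||^2 + lam*||z||_1."""
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the smooth term
    z = np.zeros(D.shape[1])
    y, t = z.copy(), 1.0
    for _ in range(n_iters):
        g = y - D.T @ (D @ y - x) / L                           # gradient step
        z_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2                # momentum schedule
        y = z_new + ((t - 1) / t_new) * (z_new - z)
        z, t = z_new, t_new
    return z

# Sparse ground-truth code, projected into activation space (superposition)
z_true = np.zeros(k)
z_true[rng.choice(k, size=s, replace=False)] = 1.0
x = D @ z_true

# Amortised inference: a single tied-weights ReLU pass over the same dictionary
z_amortised = np.maximum(D.T @ x, 0.0)
# Iterative per-sample inference on that dictionary
z_iterative = fista(x, D)

gap_amortised = np.linalg.norm(z_amortised - z_true)
gap_iterative = np.linalg.norm(z_iterative - z_true)
```

Because the dictionary atoms are correlated, the one-step encoder leaks mass onto inactive concepts, while iterative inference explains the sample away atom by atom; this is the amortisation gap in its simplest form, which the paper argues is not the dominant failure mode once the dictionary itself is wrong.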