🤖 AI Summary
Existing sparse autoencoders (SAEs) optimize sparsity only within individual layers and neglect the sparsity of inter-layer feature connectivity, which lets upstream features broadcast to many downstream features and makes extracted neural circuits harder to interpret. To address this, we introduce SCALAR, the first benchmark to quantitatively evaluate cross-layer feature interaction sparsity in SAEs. We further propose the Staircase SAE, a novel architecture that enforces weight-sharing constraints to suppress upstream feature replication and systematically model sparse cross-layer connections. Evaluated on GPT-2 Small against TopK SAEs and Jacobian SAEs, Staircase SAEs improve interaction sparsity by 59.67% in feed-forward layers and 63.15% across Transformer blocks while preserving feature interpretability. This work establishes both a quantitative benchmark and a principled architectural framework for constructing compact, interpretable neural circuits in language models.
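The weight-sharing idea can be sketched roughly as follows: a downstream SAE's decoder reuses the upstream SAE's dictionary as a shared block and only learns fresh feature directions on top of it, so upstream features need not be re-learned (duplicated) downstream. Below is a minimal NumPy sketch under that reading; the exact Staircase construction, and all variable names and sizes here, are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_up, n_new = 8, 16, 16  # hypothetical sizes, for illustration only

# Upstream SAE decoder: each column is one learned feature direction.
D_up = rng.standard_normal((d_model, n_up))

# Staircase-style downstream decoder: share the upstream dictionary as-is
# and append only n_new freshly learned directions. Upstream features are
# then represented by the shared block instead of being replicated among
# the downstream SAE's own features.
D_new = rng.standard_normal((d_model, n_new))
D_down = np.concatenate([D_up, D_new], axis=1)  # (d_model, n_up + n_new)

# A downstream reconstruction splits into a shared part and a new part.
z = rng.standard_normal(n_up + n_new)
x_hat = D_down @ z
x_hat_split = D_up @ z[:n_up] + D_new @ z[n_up:]
assert np.allclose(x_hat, x_hat_split)
```

The design intuition: if an upstream feature already has a direction in the shared block, the downstream sparsity penalty has no reason to spend new dictionary columns re-encoding it, which is what suppresses duplication.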
📝 Abstract
Mechanistic interpretability aims to decompose neural networks into interpretable features and map the circuits connecting them. The standard approach trains sparse autoencoders (SAEs) on each layer's activations. However, SAEs trained in isolation do not encourage sparse cross-layer connections, inflating extracted circuits in which upstream features needlessly affect multiple downstream features. Current evaluations focus on individual SAE performance, leaving interaction sparsity unexamined. We introduce SCALAR (Sparse Connectivity Assessment of Latent Activation Relationships), a benchmark measuring interaction sparsity between SAE features. We also propose "Staircase SAEs", which use weight-sharing to limit upstream feature duplication across downstream features. Using SCALAR, we compare TopK SAEs, Jacobian SAEs (JSAEs), and Staircase SAEs. Staircase SAEs improve relative sparsity over TopK SAEs by $59.67\% \pm 1.83\%$ (feedforward layers) and $63.15\% \pm 1.35\%$ (transformer blocks). JSAEs provide an $8.54\% \pm 0.38\%$ improvement over TopK for feedforward layers but cannot be trained effectively across transformer blocks, unlike Staircase and TopK SAEs, which work anywhere in the residual stream. We validate on a 216K-parameter toy model and GPT-2 Small (124M), where Staircase SAEs maintain their interaction sparsity improvements while preserving feature interpretability. Our work highlights the importance of interaction sparsity in SAEs by benchmarking and comparing promising architectures.
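To make the two moving parts concrete, here is a small NumPy sketch: a TopK-style encoder that keeps only the k largest latent activations per sample, and a toy interaction-sparsity score that counts the fraction of negligible (upstream feature, downstream feature) interaction strengths. The score is an illustrative stand-in, not SCALAR's actual metric, and the function names are assumptions.

```python
import numpy as np

def topk_encode(x, W_enc, b_enc, k):
    """TopK SAE encoder: ReLU pre-activations, then keep only the k
    largest latents per sample and zero out the rest."""
    z = np.maximum(x @ W_enc + b_enc, 0.0)
    drop = np.argpartition(z, -k, axis=-1)[..., :-k]  # the d-k smallest latents
    np.put_along_axis(z, drop, 0.0, axis=-1)
    return z

def interaction_sparsity(J, eps=1e-6):
    """Toy interaction-sparsity score: fraction of upstream/downstream
    feature pairs whose interaction strength |J_ij| is negligible.
    Higher means sparser cross-layer connectivity."""
    return float(np.mean(np.abs(J) < eps))

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))           # 4 samples, d_model = 8
W_enc = rng.standard_normal((8, 32))      # 32 latent features
z = topk_encode(x, W_enc, np.zeros(32), k=4)
assert (z > 0).sum(axis=-1).max() <= 4    # at most k active latents per sample

# An interaction matrix where only 2 of the 12 pairs are non-negligible.
J = np.zeros((3, 4)); J[0, 1] = 0.8; J[2, 3] = -0.5
print(interaction_sparsity(J))            # 10 of 12 pairs are ~zero
```

Per-layer TopK enforces only the `k` constraint inside each SAE; nothing in it makes the cross-layer matrix `J` sparse, which is the gap SCALAR is built to measure.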