Train One Sparse Autoencoder Across Multiple Sparsity Budgets to Preserve Interpretability and Accuracy

📅 2025-05-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional sparse autoencoders (SAEs) require separate training for each sparsity level, which incurs high computational overhead and limits flexibility. To address this, we propose a unified multi-sparsity SAE framework built on the novel HierarchicalTopK objective, enabling a single model to achieve Pareto-optimal trade-offs between sparsity and reconstruction fidelity across arbitrary sparsity levels. Our method integrates hierarchical Top-K activation selection with multi-objective reconstruction optimization, and quantifies interpretability via feature-token alignment and consistency with human annotations. Experiments on Gemma-2 2B demonstrate that our approach maintains strong interpretability even at higher sparsity levels, outperforming multiple dedicated single-sparsity SAEs, while improving explained variance in reconstruction and significantly reducing both training and deployment costs.

📝 Abstract
Sparse Autoencoders (SAEs) have proven to be powerful tools for interpreting neural networks by decomposing hidden representations into disentangled, interpretable features via sparsity constraints. However, conventional SAEs are constrained by the fixed sparsity level chosen during training; meeting different sparsity requirements therefore demands separate models and increases the computational footprint during both training and evaluation. We introduce a novel training objective, HierarchicalTopK, which trains a single SAE to optimise reconstructions across multiple sparsity levels simultaneously. Experiments with Gemma-2 2B demonstrate that our approach achieves Pareto-optimal trade-offs between sparsity and explained variance, outperforming traditional SAEs trained at individual sparsity levels. Further analysis shows that HierarchicalTopK preserves high interpretability scores even at higher sparsity. The proposed objective thus closes an important gap between flexibility and interpretability in SAE design.
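The objective described in the abstract can be sketched in PyTorch as follows. This is a minimal illustration, not the paper's reference implementation: the class name, the nested top-k interpretation of "hierarchical", and the choice of sparsity budgets `ks` are all assumptions made for exposition.

```python
import torch
import torch.nn as nn

class HierarchicalTopKSAE(nn.Module):
    """Sketch of an SAE trained across several sparsity budgets at once.

    Names and shapes are illustrative; the key idea is that one encoder/
    decoder pair is optimised for reconstructions at every budget in `ks`.
    """

    def __init__(self, d_model: int, d_dict: int):
        super().__init__()
        self.enc = nn.Linear(d_model, d_dict)
        self.dec = nn.Linear(d_dict, d_model)

    def loss(self, x: torch.Tensor, ks=(8, 16, 32, 64)) -> torch.Tensor:
        # Encode once; rank latents by activation magnitude.
        a = torch.relu(self.enc(x))                       # (batch, d_dict)
        order = a.argsort(dim=-1, descending=True)
        total = x.new_zeros(())
        for k in ks:
            # Nested ("hierarchical") supports: the active latents for a
            # smaller k are a prefix of those for a larger k.
            mask = torch.zeros_like(a)
            mask.scatter_(-1, order[:, :k], 1.0)
            recon = self.dec(a * mask)
            total = total + (recon - x).pow(2).mean()
        # Average reconstruction error over all sparsity budgets.
        return total / len(ks)
```

Because the per-budget supports are nested prefixes of one ranking, a single forward pass through the encoder serves every budget, which is where the training-cost saving over separate per-k SAEs comes from.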
Problem

Research questions and friction points this paper is trying to address.

Train one SAE for multiple sparsity levels efficiently
Balance sparsity and accuracy in neural interpretations
Maintain interpretability at higher sparsity in SAEs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Train single SAE for multiple sparsity levels
HierarchicalTopK optimizes reconstructions across sparsity
Preserves interpretability at higher sparsity levels
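The deployment-side benefit listed above can be illustrated with a short hedged sketch: once such a model is trained, the same latents can be decoded at any sparsity budget k without retraining. The function name and tensor layout below are assumptions for illustration only.

```python
import torch

def topk_decode(a: torch.Tensor, dec_weight: torch.Tensor,
                dec_bias: torch.Tensor, k: int) -> torch.Tensor:
    """Decode latents `a` (batch, d_dict) keeping only the k largest
    activations per example. `dec_weight` is (d_model, d_dict)."""
    vals, idx = a.topk(k, dim=-1)
    sparse = torch.zeros_like(a).scatter_(-1, idx, vals)
    return sparse @ dec_weight.T + dec_bias

# The same trained decoder serves k=16 or k=64 at inference time,
# trading reconstruction fidelity against sparsity on demand.
```

This is the flexibility the summary contrasts with conventional SAEs, where each desired k would require its own trained model.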