Teach Old SAEs New Domain Tricks with Boosting

📅 2025-07-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Sparse autoencoders (SAEs) effectively interpret large language model (LLM) internal representations but struggle to capture domain-specific features underrepresented in general pretraining corpora. To address this, we propose a residual SAE augmentation framework: a secondary SAE is trained exclusively to reconstruct the residual errors of a pretrained primary SAE on domain-specific text—without modifying or retraining the primary model—thereby enhancing domain sensitivity. Our method employs staged training and output-superposition inference, enabling modular, composable injection of domain knowledge. Experiments across multiple specialized domains demonstrate significant reductions in LLM cross-entropy loss and substantial improvements in explained variance of neuron activations, while preserving performance on general-purpose benchmarks. These results validate the framework’s effectiveness, compatibility with existing SAEs, and scalability to diverse domains.
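The pipeline described above — a frozen primary SAE, a secondary SAE trained on its residual errors, and output superposition at inference — can be sketched as follows. This is a minimal NumPy illustration with toy, untrained weights; the `ToySAE` class, dimensions, and variable names are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

class ToySAE:
    """Minimal ReLU sparse autoencoder: linear encode + ReLU, linear decode."""
    def __init__(self, d_model, d_hidden, seed):
        rng = np.random.default_rng(seed)
        self.W_enc = rng.normal(0.0, 0.1, (d_model, d_hidden))
        self.b_enc = np.zeros(d_hidden)
        self.W_dec = rng.normal(0.0, 0.1, (d_hidden, d_model))
        self.b_dec = np.zeros(d_model)

    def __call__(self, x):
        h = np.maximum(0.0, x @ self.W_enc + self.b_enc)  # sparse ReLU code
        return h @ self.W_dec + self.b_dec                # reconstruction

d_model = 16
primary = ToySAE(d_model, 64, seed=0)    # pretrained SAE, kept frozen
secondary = ToySAE(d_model, 64, seed=1)  # residual SAE for the new domain

# Stand-in for LLM activations on domain-specific text.
x = np.random.default_rng(2).normal(size=(8, d_model))

# Staged training: the secondary SAE's regression target is the
# primary SAE's reconstruction error on domain text.
residual_target = x - primary(x)

# Output-superposition inference: sum both reconstructions.
x_hat = primary(x) + secondary(x)
```

Because the primary model is never modified, additional residual SAEs for other domains could in principle be trained and summed in the same way, which is what makes the injection modular and composable.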

📝 Abstract
Sparse autoencoders (SAEs) have emerged as powerful tools for interpreting the internal representations of large language models (LLMs), yet they often fail to capture domain-specific features that are not prevalent in their training corpora. This paper introduces a residual learning approach that addresses this feature blindness without requiring complete retraining. We propose training a secondary SAE specifically to model the reconstruction error of a pretrained SAE on domain-specific texts, effectively capturing features missed by the primary model. By summing the outputs of both models during inference, we demonstrate significant improvements in both LLM cross-entropy loss and explained variance across multiple specialized domains. Our experiments show that this method efficiently incorporates new domain knowledge into existing SAEs while maintaining their performance on general tasks. This approach enables researchers to selectively enhance SAE interpretability for specific domains of interest, opening new possibilities for targeted mechanistic interpretability of LLMs.
Problem

Research questions and friction points this paper is trying to address.

Addressing feature blindness in Sparse Autoencoders for domain-specific texts
Improving LLM cross-entropy and explained variance in specialized domains
Enhancing SAE interpretability for targeted mechanistic analysis of LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Residual learning for domain-specific feature capture
Secondary SAE models primary SAE reconstruction error
Output superposition at inference: summing both SAEs' reconstructions improves cross-entropy loss and explained variance
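The third point rests on a simple identity: if the secondary SAE's output g(x) approximates the residual r = x − f(x) of the primary SAE f, then the superposed reconstruction error x − (f(x) + g(x)) equals r − g(x), so training the secondary model on the residual directly minimizes the combined error. A quick numerical check of the identity (the scaling factors here are arbitrary stand-ins, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))   # stand-in LLM activations
f_x = 0.9 * x                 # stand-in primary SAE reconstruction
r = x - f_x                   # residual the secondary SAE is trained on
g_x = 0.8 * r                 # stand-in secondary SAE reconstruction

combined_error = x - (f_x + g_x)  # error of the superposed output
residual_error = r - g_x          # secondary SAE's error on the residual
print(np.allclose(combined_error, residual_error))  # True
```

The identity holds exactly by rearrangement, so any reduction the secondary SAE achieves on its residual objective translates one-to-one into a reduction in the superposed model's reconstruction error.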