🤖 AI Summary
Sparse autoencoders (SAEs) trained on external datasets often incorporate out-of-distribution samples, inducing spurious “fake features” and training instability, thereby compromising faithful interpretation of large language model (LLM) internal representations.
Method: We propose the first external-data-free SAE training paradigm: SAEs are trained on synthetic instruction data generated by the target LLM itself, eliminating distribution shift at its source. We also give the first formal definition and quantification of fake features, enabling targeted mitigation.
Results: Across seven mainstream LLMs, five achieve lower fake-feature ratios than Web-data baselines; SAE probing performance and cross-seed stability also improve. These findings identify alignment between the SAE training data distribution and the target LLM as a key factor for reliable, generalizable SAE interpretability.
📝 Abstract
Sparse Autoencoders (SAEs) have emerged as a promising solution for decomposing large language model representations into interpretable features. However, Paulo and Belrose (2025) have highlighted instability across different initialization seeds, and Heap et al. (2025) have pointed out that SAEs may not capture model-internal features. These problems likely stem from training SAEs on external datasets - either collected from the Web or generated by another model - which may contain out-of-distribution (OOD) data beyond the model's generalisation capabilities. This can result in hallucinated SAE features, which we term "Fake Features", that misrepresent the model's internal activations. To address these issues, we propose FaithfulSAE, a method that trains SAEs on the model's own synthetic dataset. Using FaithfulSAEs, we demonstrate that training SAEs on less-OOD instruction datasets results in SAEs being more stable across seeds. Notably, FaithfulSAEs outperform SAEs trained on web-based datasets in the SAE probing task and exhibit a lower Fake Feature Ratio in 5 out of 7 models. Overall, our approach eliminates the dependency on external datasets, advancing interpretability by better capturing model-internal features while highlighting the often neglected importance of SAE training datasets.
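For readers unfamiliar with SAEs, the decomposition the abstract describes can be sketched as follows. This is a minimal illustrative example, not the FaithfulSAE implementation: the dimensions, initialisation, and sparsity weight are arbitrary, and a real SAE would be trained on activations collected from the target LLM.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_sae = 16, 64  # LLM activation size; SAE dictionary size (overcomplete)
W_enc = rng.normal(scale=0.1, size=(d_sae, d_model))
b_enc = np.zeros(d_sae)
W_dec = rng.normal(scale=0.1, size=(d_model, d_sae))
b_dec = np.zeros(d_model)

def sae_forward(x):
    """Encode an activation vector into sparse features, then reconstruct it."""
    z = np.maximum(W_enc @ x + b_enc, 0.0)  # ReLU yields non-negative, sparse codes
    x_hat = W_dec @ z + b_dec               # reconstruction from the feature dictionary
    return z, x_hat

x = rng.normal(size=d_model)  # stand-in for an LLM residual-stream activation
z, x_hat = sae_forward(x)

# Training minimises reconstruction error plus an L1 penalty encouraging sparsity:
loss = np.sum((x - x_hat) ** 2) + 0.01 * np.sum(np.abs(z))
```

The paper's argument is about *which* `x` vectors this objective sees: if they come from out-of-distribution Web text rather than inputs the model generalises to, some learned dictionary columns may be "fake features" that do not correspond to model-internal computation.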