FaithfulSAE: Towards Capturing Faithful Features with Sparse Autoencoders without External Dataset Dependencies

📅 2025-06-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Sparse autoencoders (SAEs) trained on external datasets often incorporate out-of-distribution samples, inducing spurious “fake features” and training instability, thereby compromising faithful interpretation of large language model (LLM) internal representations. Method: We propose the first external-data-free SAE training paradigm, leveraging synthetic instruction data generated by the target LLM itself—eliminating distribution shift at its source. Our method formally defines and quantifies fake features for the first time, enabling targeted mitigation. Results: Across seven mainstream LLMs, five achieve lower fake feature rates than Web-data baselines; SAE probe task performance and cross-random-seed stability also improve. These findings establish training data distribution alignment with the target LLM as a critical factor for enhancing SAE interpretability reliability and generalization.

📝 Abstract
Sparse Autoencoders (SAEs) have emerged as a promising solution for decomposing large language model representations into interpretable features. However, Paulo and Belrose (2025) have highlighted instability across different initialization seeds, and Heap et al. (2025) have pointed out that SAEs may not capture model-internal features. These problems likely stem from training SAEs on external datasets - either collected from the Web or generated by another model - which may contain out-of-distribution (OOD) data beyond the model's generalisation capabilities. This can result in hallucinated SAE features, which we term "Fake Features", that misrepresent the model's internal activations. To address these issues, we propose FaithfulSAE, a method that trains SAEs on the model's own synthetic dataset. Using FaithfulSAEs, we demonstrate that training SAEs on less-OOD instruction datasets results in SAEs being more stable across seeds. Notably, FaithfulSAEs outperform SAEs trained on web-based datasets in the SAE probing task and exhibit a lower Fake Feature Ratio in 5 out of 7 models. Overall, our approach eliminates the dependency on external datasets, advancing interpretability by better capturing model-internal features while highlighting the often neglected importance of SAE training datasets.
Problem

Research questions and friction points this paper is trying to address.

SAE instability across different initialization seeds
SAEs may not capture model-internal features
Training SAEs on external datasets causes hallucinated features
Innovation

Methods, ideas, or system contributions that make the work stand out.

Trains SAEs on the model's own synthetic dataset
Reduces dependency on external datasets
Improves stability and feature faithfulness
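The core object in the paper, a sparse autoencoder that decomposes LLM activations into interpretable features, can be sketched as below. This is a minimal illustrative implementation of a standard ReLU SAE with an L1 sparsity penalty; the dimensions, variable names, and penalty coefficient are assumptions for the sketch, not values taken from the paper (which additionally trains on the model's own synthetic data rather than Web text).

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_sae = 16, 64          # residual-stream width, SAE dictionary size (illustrative)

# Randomly initialized encoder/decoder weights; a real run would train these.
W_enc = rng.normal(0, 0.1, (d_model, d_sae))
b_enc = np.zeros(d_sae)
W_dec = rng.normal(0, 0.1, (d_sae, d_model))
b_dec = np.zeros(d_model)

def sae_forward(x):
    """Encode activations x into sparse feature codes f, then reconstruct."""
    f = np.maximum(x @ W_enc + b_enc, 0.0)   # ReLU encoder -> nonnegative, sparse codes
    x_hat = f @ W_dec + b_dec                # linear decoder reconstructs the activation
    return f, x_hat

def sae_loss(x, l1_coeff=1e-3):
    """Reconstruction error plus L1 sparsity penalty on the feature codes."""
    f, x_hat = sae_forward(x)
    recon = np.mean((x - x_hat) ** 2)
    sparsity = l1_coeff * np.mean(np.abs(f))
    return recon + sparsity

x = rng.normal(size=(8, d_model))            # stand-in for a batch of model activations
f, x_hat = sae_forward(x)                    # f.shape == (8, 64), x_hat.shape == (8, 16)
```

The paper's argument is about where `x` comes from: if the activation batches are produced by running the target model on out-of-distribution Web text, the learned dictionary can contain "Fake Features"; FaithfulSAE instead collects `x` from text the target model generated itself.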
Seonglae Cho
University College London
Mechanistic Interpretability · Language Modeling · AI Alignment
Harryn Oh
University College London
Donghyun Lee
University College London
Luis Eduardo Rodrigues Vieira
University College London
Andrew Bermingham
MSc Student, University College London
Ziad El Sayed
MSc Machine Learning, UCL