🤖 AI Summary
Traditional sparse autoencoders (SAEs) trained on broad-domain data have a fixed latent budget, which they spend predominantly on high-frequency, generic patterns. This leaves substantial linear “dark matter” in the reconstruction error and entangles features, undermining mechanistic interpretability. To address this, we propose a domain-constrained SAE framework for medical text, training JumpReLU SAEs on Gemma-2’s layer-20 activations over 195K clinical Q&A samples. By reallocating latent capacity toward low-frequency but semantically rich clinical concepts, our method suppresses dark matter and mitigates feature splitting and absorption. Compared with general-purpose SAEs, our approach explains up to 20% more variance, substantially reduces residual reconstruction error, and improves loss recovery. Both automated and human evaluations confirm that the learned latent features carry clear, clinically grounded semantics, enhancing interpretability and utility for medical AI analysis.
📝 Abstract
Sparse autoencoders (SAEs) decompose large language model (LLM) activations into latent features that reveal mechanistic structure. Conventional SAEs train on broad data distributions, forcing a fixed latent budget to capture only high-frequency, generic patterns. This often leaves significant linear “dark matter” in the reconstruction error and produces latents that fragment or absorb each other, complicating interpretation. We show that restricting SAE training to a well-defined domain (medical text) reallocates capacity to domain-specific features, improving both reconstruction fidelity and interpretability. Training JumpReLU SAEs on layer-20 activations of Gemma-2 models using 195K clinical Q&A examples, we find that domain-confined SAEs explain up to 20% more variance, achieve higher loss recovery, and reduce linear residual error compared to broad-domain SAEs. Automated and human evaluations confirm that learned features align with clinically meaningful concepts (e.g., “taste sensations” or “infectious mononucleosis”) rather than frequent but uninformative tokens. These domain-specific SAEs capture relevant linear structure, leaving a smaller, more purely nonlinear residual. We conclude that domain confinement mitigates key limitations of broad-domain SAEs, enabling more complete and interpretable latent decompositions, and suggesting the field may need to question “foundation-model” scaling for general-purpose SAEs.
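To make the method concrete, the forward pass of a JumpReLU SAE can be sketched as follows. This is a minimal NumPy illustration with toy dimensions and randomly initialized weights (the paper's actual models are trained on Gemma-2 layer-20 activations with far larger latent dimensions); the parameter names and sizes here are assumptions for exposition, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_latent = 8, 32  # toy sizes; real SAEs use thousands of latents

# Randomly initialized parameters (illustrative; in practice these are trained)
W_enc = rng.normal(0, 0.1, (d_model, d_latent))
b_enc = np.zeros(d_latent)
W_dec = rng.normal(0, 0.1, (d_latent, d_model))
b_dec = np.zeros(d_model)
theta = np.full(d_latent, 0.05)  # per-latent JumpReLU thresholds (learned)

def jumprelu(z, theta):
    # Pass pre-activations through unchanged where they exceed the threshold,
    # zero them otherwise: sparse activations without ReLU-style shrinkage.
    return np.where(z > theta, z, 0.0)

def sae_forward(x):
    z = x @ W_enc + b_enc      # encoder pre-activations
    f = jumprelu(z, theta)     # sparse latent features
    x_hat = f @ W_dec + b_dec  # linear decoder reconstruction
    return f, x_hat

x = rng.normal(size=d_model)   # stand-in for an LLM residual-stream activation
f, x_hat = sae_forward(x)
residual = x - x_hat           # the linear "dark matter" lives in this residual
```

The "variance explained" and residual-error metrics discussed in the abstract are computed from `x_hat` and `residual` over a held-out activation set; domain confinement aims to shrink that residual for in-domain text.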