🤖 AI Summary
Existing sparse autoencoders suffer from poor reconstruction quality and limited scalability due to the non-smoothness of the L1 penalty, and their learned features are often poorly aligned with human-interpretable semantics. To address these limitations, this work proposes a supervised sparse autoencoder framework that incorporates the unconstrained feature model from neural collapse theory. By jointly optimizing concept embeddings and decoder weights, the method learns semantically compositional features. This approach substantially improves semantic alignment and compositional generalization, achieving high-quality image reconstruction for concept combinations unseen during training in Stable Diffusion 3.5. Furthermore, it supports semantic-level image-editing interventions without requiring any modification of the input prompts.
📝 Abstract
Sparse autoencoders (SAEs) have re-emerged as a prominent method for mechanistic interpretability, yet they face two significant challenges: the non-smoothness of the $L_1$ penalty, which hinders reconstruction and scalability, and a lack of alignment between learned features and human semantics. In this paper, we address these limitations by adapting unconstrained feature models, a mathematical framework from neural collapse theory, and by supervising the reconstruction task. We train (decoder-only) SAEs to reconstruct feature vectors by jointly learning sparse concept embeddings and decoder weights. Validated on Stable Diffusion 3.5, our approach demonstrates compositional generalization, successfully reconstructing images with concept combinations unseen during training, and enables feature-level intervention for semantic image editing without prompt modification.
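The abstract's core idea, a decoder-only SAE in which sparse concept codes and decoder weights are optimized jointly under concept supervision, can be illustrated with a toy sketch. This is a minimal assumption-laden illustration, not the paper's implementation: the data, shapes, learning rate, and the projection-onto-labeled-support step are all hypothetical choices, and the objective is plain reconstruction loss $\|X - ZD\|^2$ with codes constrained to each sample's concept labels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all shapes and values are illustrative, not from the paper):
# n feature vectors of dimension d, generated from k underlying concepts.
n, d, k = 200, 32, 8
true_D = rng.normal(size=(k, d)) / np.sqrt(d)        # ground-truth concept dictionary
labels = (rng.random((n, k)) < 0.25).astype(float)   # sparse binary concept labels
X = labels @ true_D + 0.01 * rng.normal(size=(n, d)) # noisy observed features

# Decoder-only SAE: no encoder network. We jointly learn the sparse codes Z
# (supervised to live on each sample's labeled concept support) and the
# decoder weights D by gradient descent on the reconstruction error X - Z D.
D = 0.1 * rng.normal(size=(k, d))
Z = labels.copy()  # initialize codes on the labeled support

lr = 0.1
for step in range(500):
    R = Z @ D - X           # residual, shape (n, d)
    grad_D = Z.T @ R / n    # decoder gradient, averaged over samples
    grad_Z = R @ D.T        # per-sample code gradients
    D -= lr * grad_D
    Z -= lr * grad_Z
    Z *= labels             # project codes back onto the labeled support

loss = float(np.mean((X - Z @ D) ** 2))
print(f"final reconstruction MSE: {loss:.5f}")
```

Because the concept support of each code is fixed by supervision rather than induced by an $L_1$ penalty, the objective stays smooth, which is one way to read the abstract's claim about avoiding the non-smoothness of $L_1$; the alternating roles of codes and decoder make this a bilinear least-squares problem that plain gradient descent handles in this toy regime.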