🤖 AI Summary
Sparse autoencoders (SAEs) can learn interpretable concepts from neural representations, but they neglect the semantic relationships among those concepts, limiting deeper analysis of large language model (LLM) representations. To address this, we propose the Hierarchical Sparse Autoencoder (HSAE), the first SAE framework that explicitly models semantic hierarchies during dictionary learning. HSAE jointly optimizes hierarchical regularization and semantic-graph constraints to distill intermediate LLM features into a sparse concept tree with explicit topological relations. Experiments demonstrate that HSAE maintains high reconstruction fidelity while significantly improving human-evaluated concept interpretability and reducing inference overhead by 37%. This work pioneers the integration of learnable semantic hierarchies into sparse coding architectures, establishing a structured modeling paradigm for LLM interpretability research.
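The paper does not spell out its objective here, but the joint optimization it describes can be illustrated with a minimal numpy sketch. Everything below is an assumption for illustration: the `parent` array standing in for HSAE's learned concept tree, the penalty form (a child feature firing more strongly than its parent is penalized), and the loss weights are all hypothetical, not the authors' actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: activation width d, dictionary size m (hypothetical values).
d, m = 16, 32
x = rng.normal(size=(8, d))              # batch of intermediate LLM activations
W_enc = rng.normal(size=(d, m)) * 0.1
W_dec = rng.normal(size=(m, d)) * 0.1
b_enc = np.zeros(m)

# Hypothetical parent pointer per latent feature (-1 = root), standing in
# for the semantic hierarchy HSAE would learn rather than fix in advance.
parent = rng.integers(-1, m, size=m)

z = np.maximum(x @ W_enc + b_enc, 0.0)   # sparse codes (ReLU encoder)
x_hat = z @ W_dec                        # reconstruction

recon = np.mean((x - x_hat) ** 2)        # reconstruction fidelity term
sparsity = np.mean(np.abs(z))            # standard L1 sparsity term

# Assumed hierarchy penalty: a child activating above its parent violates
# tree consistency, nudging codes toward hierarchical activation patterns.
child_mask = parent >= 0
child_act = z[:, child_mask]
parent_act = z[:, parent[child_mask]]
hierarchy = np.mean(np.maximum(child_act - parent_act, 0.0))

loss = recon + 1e-3 * sparsity + 1e-3 * hierarchy
```

The point of the sketch is only the shape of the objective: reconstruction keeps fidelity high, while the extra term ties feature activations to an explicit tree structure.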
📝 Abstract
Sparse dictionary learning (and, in particular, sparse autoencoders) attempts to learn a set of human-understandable concepts that can explain variation in an abstract space. A basic limitation of this approach is that it neither exploits nor represents the semantic relationships between the learned concepts. In this paper, we introduce a modified SAE architecture that explicitly models a semantic hierarchy of concepts. Applying this architecture to the internal representations of large language models shows that semantic hierarchy can be learned, and that doing so improves both reconstruction and interpretability. The architecture also yields significant gains in computational efficiency.
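For readers unfamiliar with the baseline the paper modifies, here is a minimal sketch of the standard (non-hierarchical) SAE setup: a ReLU encoder, a linear decoder dictionary, and plain gradient descent on reconstruction error plus an L1 sparsity penalty. The synthetic data, dimensions, learning rate, and penalty weight are all illustrative choices, not anything from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "activations": sparse combinations of k ground-truth concepts.
d, k, n = 20, 10, 256
concepts = rng.normal(size=(k, d))
codes = rng.random(size=(n, k)) * (rng.random(size=(n, k)) < 0.2)
X = codes @ concepts

m = 30                                   # overcomplete dictionary size
W_enc = rng.normal(size=(d, m)) * 0.1
W_dec = W_enc.T.copy()
b = np.zeros(m)
lr, lam = 0.05, 1e-3                     # illustrative hyperparameters

mse0 = np.mean((np.maximum(X @ W_enc + b, 0.0) @ W_dec - X) ** 2)

for _ in range(300):                     # full-batch SGD on the SAE objective
    z = np.maximum(X @ W_enc + b, 0.0)   # sparse codes
    X_hat = z @ W_dec                    # reconstruction from the dictionary
    err = X_hat - X
    # Gradients of mean squared error + L1 sparsity on the codes.
    g_dec = z.T @ err / n
    g_z = err @ W_dec.T * (z > 0) + lam * np.sign(z)
    g_enc = X.T @ g_z / n
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc
    b -= lr * g_z.mean(axis=0)

final_mse = np.mean((np.maximum(X @ W_enc + b, 0.0) @ W_dec - X) ** 2)
```

The learned decoder rows play the role of dictionary "concepts"; the paper's contribution is to additionally organize such features into a semantic hierarchy rather than leaving them unstructured.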