From Atoms to Trees: Building a Structured Feature Forest with Hierarchical Sparse Autoencoders

📅 2026-02-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing sparse autoencoders (SAEs) struggle to capture the multi-granular semantic hierarchies intrinsic to language model representations. To address this limitation, this work proposes the Hierarchical Sparse Autoencoder (HSAE), which jointly trains multiple SAEs while enforcing explicit structural constraints and applying stochastic perturbations to parent-child feature representations. The authors present this as the first approach to recover semantic hierarchies within the SAE framework. By aligning features across multiple layers of large language models, HSAE constructs interpretable feature forests that reveal the hierarchical organization of linguistic representations, while maintaining reconstruction fidelity and interpretability comparable to standard SAEs.

📝 Abstract
Sparse autoencoders (SAEs) have proven effective for extracting monosemantic features from large language models (LLMs), yet these features are typically identified in isolation. However, broad evidence suggests that LLMs capture the intrinsic structure of natural language, where the phenomenon of "feature splitting" in particular indicates that such structure is hierarchical. To capture this, we propose the Hierarchical Sparse Autoencoder (HSAE), which jointly learns a series of SAEs and the parent-child relationships between their features. HSAE strengthens the alignment between parent and child features through two novel mechanisms: a structural constraint loss and a random feature perturbation mechanism. Extensive experiments across various LLMs and layers demonstrate that HSAE consistently recovers semantically meaningful hierarchies, supported by both qualitative case studies and rigorous quantitative metrics. At the same time, HSAE preserves the reconstruction fidelity and interpretability of standard SAEs across different dictionary sizes. Our work provides a powerful, scalable tool for discovering and analyzing the multi-scale conceptual structures embedded in LLM representations.
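The abstract names two mechanisms (a structural constraint loss and random feature perturbation) without detailing them. The following is a minimal NumPy sketch of how such a joint objective *could* look: two SAEs of different dictionary sizes over the same activations, a hypothetical `parent_of` assignment from child to parent features, a cosine-alignment penalty between child and parent decoder directions, and noise injected into active child codes. All names, shapes, and coefficients here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: activation dim, parent dictionary, child dictionary.
d_model, m_parent, m_child = 16, 8, 32

W_enc_p = rng.normal(size=(d_model, m_parent)) / np.sqrt(d_model)
W_dec_p = rng.normal(size=(m_parent, d_model)) / np.sqrt(m_parent)
W_enc_c = rng.normal(size=(d_model, m_child)) / np.sqrt(d_model)
W_dec_c = rng.normal(size=(m_child, d_model)) / np.sqrt(m_child)

# Hypothetical parent assignment: each child feature maps to one parent.
parent_of = rng.integers(0, m_parent, size=m_child)

def relu(x):
    return np.maximum(x, 0.0)

def hsae_loss(x, lam_sparse=1e-3, lam_struct=1e-2, noise_scale=0.1):
    # Standard SAE encode/decode at both granularities.
    z_p = relu(x @ W_enc_p)
    z_c = relu(x @ W_enc_c)
    # Random feature perturbation (sketch): jitter active child codes so
    # reconstruction cannot rely on brittle, unaligned features.
    z_c_pert = z_c + noise_scale * rng.normal(size=z_c.shape) * (z_c > 0)
    recon_loss = (np.mean((x - z_p @ W_dec_p) ** 2)
                  + np.mean((x - z_c_pert @ W_dec_c) ** 2))
    sparse_loss = lam_sparse * (np.abs(z_p).sum() + np.abs(z_c).sum())
    # Structural constraint (sketch): each child decoder direction should
    # point roughly along its assigned parent's decoder direction.
    def unit(v):
        return v / (np.linalg.norm(v, axis=-1, keepdims=True) + 1e-8)
    cos = np.sum(unit(W_dec_c) * unit(W_dec_p[parent_of]), axis=-1)
    struct_loss = lam_struct * np.mean(1.0 - cos)
    return recon_loss + sparse_loss + struct_loss

x = rng.normal(size=(4, d_model))  # toy batch of activations
loss = hsae_loss(x)
```

In this sketch the child-to-parent mapping is fixed up front; the paper instead *learns* the parent-child relationships jointly with the dictionaries, which this toy version does not capture.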
Problem

Research questions and friction points this paper is trying to address.

hierarchical structure
sparse autoencoders
feature hierarchy
large language models
monosemantic features
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical Sparse Autoencoder
feature hierarchy
structural constraint loss
random feature perturbation
monosemantic features