Linear Representations of Hierarchical Concepts in Language Models

📅 2026-04-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how language models encode hierarchical conceptual relationships (e.g., "Japan ⊂ East Asia ⊂ Asia") in their internal representations. The authors train linear transformations specific to each semantic domain and hierarchy depth, and combine this with cross-layer representation analysis, multi-token entity modeling, and cross-domain transfer evaluation. Going beyond prior work, the study shows that such hierarchical structure is embedded in low-dimensional, domain-specific subspaces in a highly interpretable linear form. Experiments demonstrate that hierarchical relations can be linearly recovered from model representations within a domain, and that these subspaces are structurally similar and transferable across domains.
📝 Abstract
We investigate how and to what extent hierarchical relations (e.g., Japan $\subset$ Eastern Asia $\subset$ Asia) are encoded in the internal representations of language models. Building on Linear Relational Concepts, we train linear transformations specific to each hierarchical depth and semantic domain, and characterize representational differences associated with hierarchical relations by comparing these transformations. Going beyond prior work on the representational geometry of hierarchies in LMs, our analysis covers multi-token entities and cross-layer representations. Across multiple domains we learn such transformations and evaluate in-domain generalization to unseen data and cross-domain transfer. Experiments show that, within a domain, hierarchical relations can be linearly recovered from model representations. We then analyze how hierarchical information is encoded in representation space. We find that it is encoded in a relatively low-dimensional subspace and that this subspace tends to be domain-specific. Our main result is that hierarchy representation is highly similar across these domain-specific subspaces. Overall, we find that all models considered in our experiments encode concept hierarchies in the form of highly interpretable linear representations.
Problem

Research questions and friction points this paper is trying to address.

hierarchical concepts
language models
linear representations
representational geometry
concept hierarchies
Innovation

Methods, ideas, or system contributions that make the work stand out.

linear representations
hierarchical concepts
language models
representational geometry
domain-specific subspaces