🤖 AI Summary
Existing formalizations of linear representations in large language models (LLMs) cover binary concepts with natural contrasts (e.g., male vs. female) as directions in representation space; unipolar categorical concepts such as "is an animal" lack a natural antonym and fall outside that framework.
Method: We propose an extended linear representation hypothesis that models unipolar categories as direction vectors and categorical concepts as polytopes in representation space, and we prove a relationship between the conceptual hierarchy and the geometry of its representation (smaller angles between vectors of deeper, related categories, and containment of child polytopes within parent polytopes). Using WordNet, we construct a hierarchical set of 900+ concepts, estimate direction vectors in Gemma and LLaMA-3, and jointly analyze the resulting convex geometry and semantics.
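The estimation step can be sketched as a difference-of-means over embeddings. Everything below is illustrative: the random vectors stand in for model unembedding vectors of words in a WordNet category, and the helper name `concept_direction` is our own, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in embeddings: in practice these would come from Gemma or
# LLaMA-3 for words belonging to a WordNet category (assumption for
# illustration; the paper's actual estimator may differ).
dim = 64
animal_words = rng.normal(size=(20, dim))    # e.g. "dog", "cat", ...
background = rng.normal(size=(200, dim))     # general vocabulary

def concept_direction(members, background):
    """Estimate a unipolar concept as the unit direction from the
    background mean toward the mean of the category members."""
    d = members.mean(axis=0) - background.mean(axis=0)
    return d / np.linalg.norm(d)

v_animal = concept_direction(animal_words, background)
print(v_animal.shape)  # (64,)
```

Repeating this for every category in the hierarchy yields one direction vector per concept, which can then be compared geometrically.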
Results: Empirically, direction vectors of deeper categories exhibit significantly smaller pairwise angles, and parent-category polytopes approximately contain those of their children. These findings support the view that LLMs organize semantic categories in interpretable geometric structures, giving a formal, hierarchically grounded account of semantic representation in LLMs.
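The angle measurement underlying these results is a plain cosine computation between estimated direction vectors; a minimal sketch (the example vectors are arbitrary, not estimates from any model):

```python
import numpy as np

def angle_deg(u, v):
    """Angle between two direction vectors, in degrees."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip to guard against floating-point values just outside [-1, 1].
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

u = np.array([1.0, 0.0, 0.0])
v = np.array([1.0, 1.0, 0.0])
print(angle_deg(u, v))  # ≈ 45.0
```

Computing this over all pairs of concept vectors, grouped by depth in the WordNet hierarchy, is what the pairwise-angle comparison amounts to.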
📝 Abstract
The linear representation hypothesis is the informal idea that semantic concepts are encoded as linear directions in the representation spaces of large language models (LLMs). Previous work has shown how to make this notion precise for representing binary concepts that have natural contrasts (e.g., {male, female}) as directions in representation space. However, many natural concepts do not have natural contrasts (e.g., whether the output is about an animal). In this work, we show how to extend the formalization of the linear representation hypothesis to represent features (e.g., is_animal) as vectors. This allows us to immediately formalize the representation of categorical concepts as polytopes in the representation space. Further, we use the formalization to prove a relationship between the hierarchical structure of concepts and the geometry of their representations. We validate these theoretical results on the Gemma and LLaMA-3 large language models, estimating representations for 900+ hierarchically related concepts using data from WordNet.
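The polytope containment mentioned above can be tested numerically. As a simplified sketch, when a categorical concept is represented by a simplex (d+1 affinely independent vertex vectors in d dimensions), membership reduces to solving for barycentric coordinates; the 2-D triangle here is a toy example, not data from the paper:

```python
import numpy as np

def in_simplex(point, vertices):
    """Check whether `point` lies in the simplex spanned by `vertices`
    (d+1 affinely independent points in d dimensions) by solving
    vertices.T @ lam = point with sum(lam) = 1, then testing lam >= 0."""
    A = np.vstack([vertices.T, np.ones(len(vertices))])
    b = np.append(point, 1.0)
    lam = np.linalg.solve(A, b)
    return bool(np.all(lam >= -1e-12))

triangle = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(in_simplex(np.array([0.25, 0.25]), triangle))  # True
print(in_simplex(np.array([0.90, 0.90]), triangle))  # False
```

Checking that each vertex of a child concept's polytope passes such a test against the parent's polytope is one way to operationalize the hierarchical containment claim; general polytopes (more vertices than d+1) would need a linear-programming feasibility check instead.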