🤖 AI Summary
This study investigates how large language models (LLMs) internally represent and distinguish named entity mentions, addressing the many-to-many mapping between entities and their surface mentions.
Method: We propose the first clustering-inspired evaluation framework for entity recognition, integrating representation clustering, low-dimensional linear subspace detection, and knowledge-structure isomorphism modeling to quantify both intra-entity mention cohesion and inter-entity mention separation.
Contribution/Results: We find that entity information is compactly encoded in low-dimensional linear subspaces as early as the initial Transformer layers, and that these subspaces exhibit structural isomorphism with real-world entity knowledge. Across five mainstream LLMs, the framework yields entity recognition scores analogous to precision and recall of 0.66–0.90. Results confirm that strong entity discrimination emerges early in the model hierarchy, and that the quality of entity representations significantly influences downstream token prediction performance.
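The cohesion/separation evaluation can be pictured as pairwise clustering precision and recall over mention representations. The sketch below is illustrative only: the function name, the toy cluster/entity label lists, and the pairwise formulation are my assumptions, not the paper's exact metric definitions.

```python
def pairwise_precision_recall(cluster_ids, entity_ids):
    """Pairwise clustering metrics over all mention pairs.

    cluster_ids: cluster assignment per mention (e.g., from clustering
                 LM hidden states); entity_ids: gold entity per mention.
    """
    n = len(cluster_ids)
    same_cluster = same_entity = both = 0
    for i in range(n):
        for j in range(i + 1, n):
            c = cluster_ids[i] == cluster_ids[j]
            e = entity_ids[i] == entity_ids[j]
            same_cluster += c
            same_entity += e
            both += c and e
    # precision-analog: same-cluster pairs that truly share an entity
    precision = both / same_cluster if same_cluster else 0.0
    # recall-analog: same-entity pairs that land in the same cluster
    recall = both / same_entity if same_entity else 0.0
    return precision, recall

# Toy example: 5 mentions, 2 clusters, 2 gold entities
print(pairwise_precision_recall([0, 0, 0, 1, 1], [0, 0, 1, 1, 1]))
```

High precision here corresponds to low ambiguity (clusters rarely mix entities), and high recall to low variability (an entity's surface variants stay together).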
📄 Abstract
We analyze the extent to which internal representations of language models (LMs) identify and distinguish mentions of named entities, focusing on the many-to-many correspondence between entities and their mentions. We first formulate two problems of entity mentions -- ambiguity and variability -- and propose a framework analogous to clustering quality metrics. Specifically, through cluster analysis of LM internal representations, we quantify the extent to which mentions of the same entity cluster together and mentions of different entities remain separated. Our experiments on five Transformer-based autoregressive models show that they effectively identify and distinguish entities, with metrics analogous to precision and recall ranging from 0.66 to 0.90. Further analysis reveals that entity-related information is compactly represented in a low-dimensional linear subspace at early LM layers. Additionally, we clarify how the characteristics of entity representations influence word prediction performance. These findings are interpreted through the lens of isomorphism between LM representations and real-world entity-centric knowledge structures, providing insights into how LMs internally organize and use entity information.
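The "compact low-dimensional linear subspace" finding suggests a simple PCA-style check: how many principal directions of a layer's mention representations are needed to capture most of their variance. This is a minimal sketch under my own assumptions (the function name, the (n_mentions, hidden_dim) input shape, and the 0.9 variance threshold are illustrative, not the paper's procedure).

```python
import numpy as np

def subspace_dim(reps, var_threshold=0.9):
    """Estimate the dimensionality of the linear subspace capturing
    `var_threshold` of the variance of mention representations.

    reps: (n_mentions, hidden_dim) array of hidden states for entity
          mentions at one layer.
    """
    X = reps - reps.mean(axis=0)            # center the representations
    s = np.linalg.svd(X, compute_uv=False)  # singular values of centered data
    var_ratio = s**2 / (s**2).sum()         # per-component variance fraction
    # smallest k whose top-k components reach the threshold
    return int(np.searchsorted(np.cumsum(var_ratio), var_threshold) + 1)

# Synthetic check: points drawn from a rank-3 subspace of a 50-dim space
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3)) @ rng.normal(size=(3, 50))
print(subspace_dim(X, 0.99))
```

A small estimate relative to the hidden dimension, already at early layers, would be consistent with the paper's claim that entity information is compactly and linearly encoded.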