On Entity Identification in Language Models

πŸ“… 2025-06-03
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This study investigates how large language models (LLMs) internally represent and distinguish named entity mentions, addressing the many-to-many mapping between entities and their surface mentions. Method: a clustering-inspired evaluation framework that applies cluster analysis to LM internal representations, quantifying both intra-entity mention cohesion and inter-entity mention separation with metrics analogous to precision and recall. Contribution/Results: across five Transformer-based autoregressive LMs, these metrics range from 0.66 to 0.90, indicating that the models effectively identify and distinguish entities. Entity information is found to be compactly encoded in a low-dimensional linear subspace from early layers onward, and these representations exhibit structural isomorphism with real-world entity-centric knowledge; the quality of entity representations also influences downstream word prediction performance.

πŸ“ Abstract
We analyze the extent to which internal representations of language models (LMs) identify and distinguish mentions of named entities, focusing on the many-to-many correspondence between entities and their mentions. We first formulate two problems of entity mentions -- ambiguity and variability -- and propose a framework analogous to clustering quality metrics. Specifically, we quantify through cluster analysis of LM internal representations the extent to which mentions of the same entity cluster together and mentions of different entities remain separated. Our experiments examine five Transformer-based autoregressive models, showing that they effectively identify and distinguish entities with metrics analogous to precision and recall ranging from 0.66 to 0.9. Further analysis reveals that entity-related information is compactly represented in a low-dimensional linear subspace at early LM layers. Additionally, we clarify how the characteristics of entity representations influence word prediction performance. These findings are interpreted through the lens of isomorphism between LM representations and entity-centric knowledge structures in the real world, providing insights into how LMs internally organize and use entity information.
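The abstract describes clustering-quality metrics analogous to precision and recall over mention representations, but does not spell out their exact form. A minimal sketch of one plausible instantiation (the function name, the k-nearest-neighbor formulation, and the Euclidean distance choice are all assumptions, not the paper's method): for each mention, "precision" is the fraction of its nearest neighbors that refer to the same entity (cohesion), and "recall" is the fraction of its same-entity peers recovered among those neighbors (separation).

```python
import numpy as np

def mention_cluster_metrics(embeddings, entity_ids, k=5):
    """Precision/recall-style clustering metrics over mention embeddings.

    Hypothetical sketch: precision = mean fraction of each mention's k
    nearest neighbors sharing its entity; recall = mean fraction of a
    mention's same-entity peers found among its k nearest neighbors.
    """
    X = np.asarray(embeddings, dtype=float)
    ids = np.asarray(entity_ids)
    n = len(ids)
    # Pairwise Euclidean distances between all mention representations.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # a mention is not its own neighbor
    precisions, recalls = [], []
    for i in range(n):
        nbrs = np.argsort(d[i])[:k]          # indices of k nearest neighbors
        same = ids[nbrs] == ids[i]           # which neighbors share the entity
        n_peers = int((ids == ids[i]).sum()) - 1  # other mentions of this entity
        precisions.append(same.mean())
        if n_peers > 0:
            recalls.append(same.sum() / min(n_peers, k))
    return float(np.mean(precisions)), float(np.mean(recalls))
```

On well-separated entity clusters both values approach 1.0; mixing entities in representation space drags them down, which is what makes the pair usable as a cohesion/separation probe.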
Problem

Research questions and friction points this paper is trying to address.

Analyze LM internal representations for entity mention identification
Measure clustering quality of same-entity vs. different-entity mentions
Examine entity representation impact on LM prediction performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cluster analysis of LM internal representations
Low-dimensional linear subspace for entities
Isomorphism between LM and entity structures
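The low-dimensional linear subspace finding can be illustrated with a standard PCA-style probe (a sketch under assumptions, not the paper's procedure; the function name and the 90% variance threshold are illustrative): if a small number of principal components of the mention representations explains most of the variance, the entity information is compactly encoded.

```python
import numpy as np

def subspace_dim(representations, var_threshold=0.9):
    """Smallest number of principal components explaining var_threshold
    of the variance in a set of mention representations - a simple proxy
    for the dimensionality of an entity subspace.
    """
    X = np.asarray(representations, dtype=float)
    X = X - X.mean(axis=0)  # center before decomposition
    # Singular values of the centered matrix give per-component variances.
    s = np.linalg.svd(X, compute_uv=False)
    var_ratio = s**2 / (s**2).sum()
    cum = np.cumsum(var_ratio)
    return int(np.searchsorted(cum, var_threshold) + 1)
```

Applied layer by layer, a probe like this would show at which depth the representations collapse onto a few directions, matching the paper's observation that the effect appears at early layers.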