On the Representations of Entities in Auto-regressive Large Language Models

📅 2025-10-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how large language models (LLMs) internally represent multi-token named entities and their semantic relations within hidden layers. To address this, we propose the "Entity Mention Reconstruction" framework, which integrates task vectors with the Entity Lens, a novel probing technique, enabling direct generation of complete multi-word entities (e.g., "New York City") from intermediate-layer hidden states without relying on output-layer logits. Leveraging task-vector analysis, an extended logit-lens, and hierarchical decoding, we characterize how entity representations evolve across layers. Experimental results demonstrate that LLMs possess dedicated, layer-localized mechanisms for encoding entities: they accurately reconstruct out-of-vocabulary multi-token entities unseen during training and implicitly capture inter-entity semantic relationships. Our findings provide new insight into the structural organization of factual knowledge in LLMs. The implementation is publicly available.

📝 Abstract
Named entities are fundamental building blocks of knowledge in text, grounding factual information and structuring relationships within language. Despite their importance, it remains unclear how Large Language Models (LLMs) internally represent entities. Prior research has primarily examined explicit relationships, but little is known about entity representations themselves. We introduce entity mention reconstruction as a novel framework for studying how LLMs encode and manipulate entities. We investigate whether entity mentions can be generated from internal representations, how multi-token entities are encoded beyond last-token embeddings, and whether these representations capture relational knowledge. Our proposed method, leveraging _task vectors_, allows us to consistently generate multi-token mentions from various entity representations derived from the LLMs' hidden states. We then introduce the _Entity Lens_, extending the _logit-lens_ to predict multi-token mentions. Our results bring new evidence that LLMs develop entity-specific mechanisms to represent and manipulate arbitrary multi-token entities, including those unseen during training. Our code is available at https://github.com/VictorMorand/EntityRepresentations.
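As a rough illustration of the task-vector idea the abstract leans on, the sketch below computes a steering vector as the mean activation difference between two sets of hidden states and adds it to a fresh hidden state. All names, shapes, and data here are toy stand-ins chosen for this sketch, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical setup: n example prompts, hidden size d (toy values).
rng = np.random.default_rng(1)
n, d = 8, 4

h_task = rng.normal(loc=1.0, size=(n, d))   # hidden states when the task is demonstrated
h_plain = rng.normal(loc=0.0, size=(n, d))  # hidden states on neutral prompts

# A task vector is the mean activation difference; injecting it into a new
# hidden state is meant to steer the model toward the task (here, roughly:
# "emit this entity's mention").
task_vec = h_task.mean(axis=0) - h_plain.mean(axis=0)

h_new = rng.normal(size=d)   # hidden state from a new context
h_steered = h_new + task_vec # patched state fed back into the model
```

In the paper's setting, the steered state would be decoded into a full multi-token mention rather than inspected directly; this sketch only shows the extraction-and-injection arithmetic.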
Problem

Research questions and friction points this paper is trying to address.

Investigating how LLMs internally represent named entities
Exploring multi-token entity encoding beyond last-token embeddings
Examining whether entity representations capture relational knowledge
Innovation

Methods, ideas, or system contributions that make the work stand out.

Entity mention reconstruction for studying representations
Task vectors generate multi-token entity mentions
Entity Lens extends logit-lens for predictions
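The logit-lens that the Entity Lens builds on can be sketched minimally: project an intermediate-layer hidden state through the model's final LayerNorm and unembedding matrix to read off token logits. The dimensions, weights, and function name below are illustrative assumptions, not the paper's code; the Entity Lens itself extends this single-token readout to full multi-token mentions.

```python
import numpy as np

def logit_lens(hidden_state, W_U, gamma, beta, eps=1e-5):
    """Read token logits off an intermediate hidden state by applying the
    model's final LayerNorm and unembedding matrix (plain logit-lens:
    one token per position)."""
    # Final LayerNorm, as applied before the output head.
    mu = hidden_state.mean()
    var = hidden_state.var()
    normed = (hidden_state - mu) / np.sqrt(var + eps)
    normed = gamma * normed + beta
    return normed @ W_U  # logits over the vocabulary

# Toy dimensions: hidden size 4, vocab size 6 (illustrative only).
rng = np.random.default_rng(0)
d, V = 4, 6
W_U = rng.normal(size=(d, V))       # stand-in unembedding matrix
h = rng.normal(size=d)              # stand-in intermediate hidden state
logits = logit_lens(h, W_U, gamma=np.ones(d), beta=np.zeros(d))
top_token = int(np.argmax(logits))  # the single token the lens predicts
```

A single argmax like this cannot recover a mention such as "New York City"; that gap is what the paper's multi-token extension addresses.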