The Geometry of Tokens in Internal Representations of Large Language Models

📅 2025-01-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how the geometric structure of token embeddings in large language models (LLMs) relates to next-token prediction performance. Method: a systematic empirical framework that probes the geometry of token representations with metrics such as intrinsic dimension, neighborhood overlap, and cosine similarity, and correlates them with the cross-entropy loss of next-token prediction across layers. Contribution/Results: prompts with high prediction loss consistently have tokens represented in higher-dimensional spaces, revealing a link between geometric dimensionality and model uncertainty; this correlation vanishes when tokens are shuffled, confirming that syntactic and semantic structure shapes embedding geometry. A layer-wise analysis further characterizes a transition from discretized to manifold-like representations across layers. These findings establish embedding geometry as an interpretable proxy for predictive performance, offering a new lens for model diagnosis and optimization.
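The intrinsic dimension mentioned above can be estimated directly from a layer's token point cloud. Below is a minimal numpy sketch using the TwoNN estimator (built from the ratio of second- to first-nearest-neighbor distances); the summary does not name the estimator the authors use, so TwoNN is an assumption here:

```python
import numpy as np

def twonn_intrinsic_dimension(X):
    """TwoNN estimate of the intrinsic dimension of a point cloud X (n, d).

    For each point, mu = r2 / r1 (second- over first-nearest-neighbor
    distance); the maximum-likelihood estimate is n / sum(log mu).
    """
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    # Full pairwise Euclidean distance matrix (fine for small n).
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)  # exclude self-distances
    dists.sort(axis=1)
    mu = dists[:, 1] / dists[:, 0]
    return n / np.sum(np.log(mu))

# Points on a 2-D plane linearly embedded in 10-D: the estimate
# should come out close to 2, regardless of the ambient dimension.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2)) @ rng.standard_normal((2, 10))
print(twonn_intrinsic_dimension(X))
```

In practice one would run an estimator like this per layer on the hidden states of a prompt's tokens and compare the result against that prompt's loss.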

📝 Abstract
We investigate the relationship between the geometry of token embeddings and their role in next-token prediction within transformer models. An important aspect of this connection uses the notion of empirical measure, which encodes the distribution of token point clouds across transformer layers and drives the evolution of token representations in the mean-field interacting picture. We use metrics such as intrinsic dimension, neighborhood overlap, and cosine similarity to observationally probe these empirical measures across layers. To validate our approach, we compare these metrics against a dataset in which the tokens are shuffled, which disrupts the syntactic and semantic structure. Our findings reveal a correlation between the geometric properties of token embeddings and the cross-entropy loss of next-token prediction, implying that prompts with higher loss values have tokens represented in higher-dimensional spaces.
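Of the metrics listed, neighborhood overlap has a particularly concrete reading: the fraction of a token's k nearest neighbors that is preserved between two representations (e.g., two layers). A minimal numpy sketch under that definition; the specific k and distance function used in the paper are not stated here, so treat both as assumptions:

```python
import numpy as np

def neighborhood_overlap(A, B, k=5):
    """Mean fraction of shared k-nearest neighbors between two point
    clouds A and B with matched rows (e.g., the same tokens at two
    different layers)."""
    def knn_indices(X):
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)  # a point is not its own neighbor
        return np.argsort(d, axis=1)[:, :k]
    na = knn_indices(np.asarray(A, dtype=float))
    nb = knn_indices(np.asarray(B, dtype=float))
    return float(np.mean([len(set(na[i]) & set(nb[i])) / k
                          for i in range(len(na))]))

# Identical clouds overlap fully; two independent random clouds
# overlap only by chance, roughly k / (n - 1).
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 8))
print(neighborhood_overlap(A, A))  # 1.0
```

Comparing this quantity between consecutive layers is one way to track how abruptly token representations are reorganized as they move through the network.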
Problem

Research questions and friction points this paper is trying to address.

Permutation Effect
Token Prediction
Representation Evolution
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformer Models
Token Representation Evolution
Prediction Error Correlation
Karthik Viswanathan
University of Amsterdam, Amsterdam, the Netherlands; Area Science Park, Trieste, Italy
Yuri Gardinazzi
Area Science Park, Trieste, Italy; University of Trieste, Trieste, Italy
Giada Panerai
Area Science Park, Trieste, Italy
Alberto Cazzaniga
Researcher, AREA Science Park
Matteo Biagetti
Area Science Park, Trieste, Italy