🤖 AI Summary
This work establishes the first theoretical link between statistical symmetries in language and the geometric structures observed in large language model embeddings, such as circular arrangements of months or one-dimensional manifolds for years. It demonstrates that translation invariance in word co-occurrence probabilities—where the probability of two words co-occurring depends only on their temporal offset (e.g., the number of months between them)—is a key mechanism driving the emergence of these structured representations. Through theoretical analysis, symmetry-based modeling, linear probing experiments, and systematic validation across word embeddings, text embeddings, and large language models, the study shows that this mechanism is highly robust: the induced geometric structures persist even at moderate embedding dimension and under strong perturbations of the co-occurrence statistics. These findings reveal the profound influence of intrinsic statistical regularities in linguistic data on representation learning.
📝 Abstract
Although learned representations underlie neural networks' success, their fundamental properties remain poorly understood. A striking example is the emergence of simple geometric structures in LLM representations: for example, calendar months organize into a circle, years form a smooth one-dimensional manifold, and cities' latitudes and longitudes can be decoded by a linear probe. We show that the statistics of language exhibit a translation symmetry -- e.g., the co-occurrence probability of two months depends only on the time interval between them -- and we prove that this symmetry governs the aforementioned geometric structures in high-dimensional word embedding models. Moreover, we find that these structures persist even when the co-occurrence statistics are strongly perturbed (for example, by removing all sentences in which two months appear together) and at moderate embedding dimension. We show that this robustness naturally emerges if the co-occurrence statistics are collectively controlled by an underlying continuous latent variable. We empirically validate this theoretical framework in word embedding models, text embedding models, and large language models.
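The core mechanism can be illustrated with a minimal sketch (not the paper's actual pipeline): if a co-occurrence matrix over 12 month-like tokens depends only on the cyclic offset between tokens, it is circulant, and circulant matrices are diagonalized by the Fourier basis. Factorizing such a matrix (SVD, in the spirit of count-based word embeddings) therefore places the tokens on a circle. The decay kernel below is an illustrative assumption, not taken from the paper.

```python
import numpy as np

# Translation-invariant toy co-occurrence matrix: entry (i, j) depends
# only on the cyclic offset (i - j) mod 12, mimicking month statistics.
n = 12
offsets = (np.arange(n)[:, None] - np.arange(n)[None, :]) % n
dist = np.minimum(offsets, n - offsets)      # cyclic distance, 0..6
cooc = np.exp(-dist.astype(float))           # assumed decay with distance

# Circulant symmetric matrices have Fourier modes as singular vectors:
# the top mode is constant, the next two form a cosine/sine pair.
U, S, _ = np.linalg.svd(cooc)
emb = U[:, 1:3] * S[1:3]                     # 2-D embedding, skip constant mode

# All 12 points have the same radius: they lie on a circle.
radii = np.linalg.norm(emb, axis=1)
print(np.allclose(radii, radii[0]))
```

Because the cosine/sine singular values are degenerate, the recovered circle is determined only up to rotation and reflection, which matches the fact that only relative offsets, not absolute positions, are encoded in the statistics.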