🤖 AI Summary
This study investigates how language models internalize the geometric structure underlying their training data during predictive tasks. By training a decoder-only Transformer to predict sequences generated by constrained random walks on a two-dimensional lattice, the authors establish, in a fully controlled setting, a precise correspondence between the model's internal representations and the geometric structure of the data-generating process. Using analytically derived sufficient statistics and representational alignment analyses, they show that hidden states across model layers align closely with a theoretically optimal prediction vector determined by the walker's position relative to the target and the number of remaining steps, and that these representations are often low-dimensional. These findings suggest that predictive geometry is a fundamental mechanism driving the formation of world-model-like representations in neural networks.
📝 Abstract
Next-token predictors often appear to develop internal representations of the latent world and its rules. The probabilistic nature of these models suggests a deep connection between the structure of the world and the geometry of probability distributions. To understand this link more precisely, we use a minimal stochastic process as a controlled setting: constrained random walks on a two-dimensional lattice that must reach a fixed endpoint after a predetermined number of steps. Optimal prediction of this process depends solely on a sufficient vector determined by the walker's position relative to the target and the remaining time horizon; in other words, the probability distributions are parametrized by the world's geometry. We train decoder-only Transformers on prefixes sampled from the exact distribution of these walks and compare their hidden activations to the analytically derived sufficient vectors. Across models and layers, the learned representations align strongly with the ground-truth predictive vectors and are often low-dimensional. This provides a concrete example in which world-model-like representations can be traced directly back to the predictive geometry of the data itself. Although demonstrated in a simplified toy system, the analysis suggests that geometric representations supporting optimal prediction may provide a useful lens for studying how neural networks internalize grammatical and other structural constraints.
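To make the setup concrete, here is a minimal sketch (not the authors' code) of the exact next-step distribution for such a constrained walk. The key fact from the abstract is that this distribution depends only on the sufficient vector (displacement to the target, steps remaining): each move's probability is proportional to the number of lattice paths that can still reach the endpoint afterward. The move set (four nearest-neighbor steps) and function names below are illustrative assumptions.

```python
from functools import lru_cache

# Assumed move set: four nearest-neighbor steps on the 2D lattice.
MOVES = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}

@lru_cache(maxsize=None)
def n_paths(dx, dy, steps):
    """Number of length-`steps` NSEW lattice walks with net displacement (dx, dy)."""
    if steps == 0:
        return 1 if (dx, dy) == (0, 0) else 0
    if abs(dx) + abs(dy) > steps:
        return 0  # target unreachable within the remaining step budget
    return sum(n_paths(dx - mx, dy - my, steps - 1) for mx, my in MOVES.values())

def next_step_dist(pos, target, steps_left):
    """Exact next-move distribution for a walk that must hit `target` in
    exactly `steps_left` moves. Note it depends only on the sufficient
    vector (target - pos, steps_left), not on the path history."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    total = n_paths(dx, dy, steps_left)
    return {m: n_paths(dx - mx, dy - my, steps_left - 1) / total
            for m, (mx, my) in MOVES.items()}
```

For example, a walk at the origin that must return to the origin in two steps assigns probability 1/4 to each move, while one that must reach (2, 0) in two steps is forced to step east. Sampling training prefixes from this distribution (move by move) would yield sequences drawn from the exact law of the constrained walk.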