🤖 AI Summary
This paper addresses the interpretability challenge posed by the opaque internal mechanisms of large language models (LLMs) by proposing a geometric analysis framework, based on local intrinsic dimensionality (LID), for studying latent space dynamics during pretraining and fine-tuning. Methodologically, it combines local principal component analysis with adaptive neighborhood estimation and validates the approach across three tasks (dialogue state tracking, emotion recognition, and arithmetic reasoning) to quantify dimensionality shifts in the contextual embedding space. Key contributions include: (i) the first use of LID as a dynamic metric to predict training saturation, the onset of overfitting, and the grokking critical point; and (ii) the observation that a decline in mean LID consistently precedes performance leaps, suggesting a "dimensionality reduction precedes performance gain" heuristic. Together, these results provide an interpretable, geometrically grounded, and quantifiable basis for model diagnostics and targeted fine-tuning.
📝 Abstract
Understanding the internal mechanisms of large language models (LLMs) remains a challenging and complex endeavor. Even fundamental questions, such as how fine-tuning affects model behavior, often require extensive empirical evaluation. In this paper, we introduce a novel perspective based on the geometric properties of contextual latent embeddings to study the effects of training and fine-tuning. To that end, we measure the local dimensions of a contextual language model's latent space and analyze how they shift during training and fine-tuning. We show that these local dimensions provide insights into the model's training dynamics and generalization ability. Specifically, the mean of the local dimensions predicts when the model's training capacity is exhausted (exemplified in a dialogue state tracking task), when it overfits (demonstrated in an emotion recognition task), and when it groks (illustrated with an arithmetic task). Furthermore, our experiments suggest a practical heuristic: reductions in the mean local dimension tend to accompany and predict subsequent performance gains. Through this exploration, we aim to give practitioners a deeper understanding of how fine-tuning reshapes embedding spaces, facilitating informed decisions when configuring models for specific applications. These results contribute to the ongoing discourse on the interpretability, adaptability, and generalizability of LLMs by bridging the gap between intrinsic model mechanisms and the geometric properties of their embeddings.
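The estimator described above (local PCA over a neighborhood of each embedding) can be sketched as follows. This is a minimal illustration, not the paper's exact implementation: it uses a fixed neighborhood size `k` and a variance-ratio cutoff `var_threshold` in place of the paper's adaptive neighborhood estimation, and the synthetic point cloud is purely for demonstration.

```python
import numpy as np

def local_pca_dimension(points, k=20, var_threshold=0.95):
    """Estimate the local intrinsic dimension at each point: count how many
    principal components of its k-nearest-neighbor patch are needed to
    explain `var_threshold` of the patch's variance."""
    n = len(points)
    dims = np.empty(n, dtype=int)
    # Pairwise squared distances (fine for small n; use a KD-tree at scale).
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    for i in range(n):
        nbrs = np.argsort(d2[i])[1 : k + 1]            # k nearest neighbors
        patch = points[nbrs] - points[nbrs].mean(0)    # center the patch
        svals = np.linalg.svd(patch, compute_uv=False)
        ratio = np.cumsum(svals**2) / (svals**2).sum() # cumulative variance
        dims[i] = np.searchsorted(ratio, var_threshold) + 1
    return dims

# Sanity check: points on a 2-D plane embedded in 10-D ambient space
# should yield a mean local dimension close to 2.
rng = np.random.default_rng(0)
coords = rng.normal(size=(200, 2))
basis = np.linalg.qr(rng.normal(size=(10, 2)))[0]      # orthonormal 10x2
cloud = coords @ basis.T
mean_lid = local_pca_dimension(cloud).mean()
```

In the paper's setting, `points` would be the contextual token embeddings collected at a training checkpoint, and `mean_lid` would be tracked across checkpoints to watch for the drops that precede performance gains.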