🤖 AI Summary
This work addresses data-centric interpretability of language models in next-token prediction. Building on the representer theorem, we propose a binary decomposition that distinguishes *support samples*, which promote a given prediction, from *non-support samples*, which suppress it. We first show that being a support sample is an intrinsic property of the data, reliably predictable from input representations even before training. Crucially, we find that non-support samples grow increasingly important with network depth and play a decisive role in regularization, intermediate representation learning, and generalization. Through layer-wise importance analysis and quantitative attribution, we characterize their regulatory effect on representation quality. These findings add a new dimension of data-decision interpretability, bridging dataset influence with internal model dynamics.
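To make the decomposition concrete, below is a minimal sketch of a representer-style attribution, assuming final-layer features and a plain dot-product similarity; the names `decompose_logit`, `train_feats`, and `alphas` are illustrative placeholders and not taken from the paper.

```python
import numpy as np

# Hedged sketch (not the paper's exact formulation): a representer-style
# decomposition of a next-token logit into per-training-sample contributions.
# We assume final-layer features for each training context, per-sample
# representer values for the target token, and a dot-product similarity.

def decompose_logit(test_feat, train_feats, alphas):
    """Return per-sample contributions whose sum gives the decomposed logit."""
    sims = train_feats @ test_feat   # similarity of each training sample to the test context
    return alphas * sims             # representer value times similarity

rng = np.random.default_rng(0)
train_feats = rng.normal(size=(5, 8))   # 5 toy training contexts, feature dim 8
alphas = rng.normal(size=5)             # toy representer values for one target token
test_feat = rng.normal(size=8)          # toy test-context representation

contrib = decompose_logit(test_feat, train_feats, alphas)
print("decomposed logit:", contrib.sum())
print("support samples (positive contribution):", np.flatnonzero(contrib > 0))
print("non-support samples (negative contribution):", np.flatnonzero(contrib <= 0))
```

In this framing, the sign of each per-sample contribution is what separates support samples (positive, promoting the prediction) from non-support samples (negative, suppressing it).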
📝 Abstract
Language models excel at a wide range of tasks by making complex decisions, yet understanding the rationale behind these decisions remains a challenge. This paper investigates *data-centric interpretability* in language models, focusing on the next-word prediction task. Using the representer theorem, we identify two types of *support samples*: those that either promote or deter specific predictions. Our findings reveal that being a support sample is an intrinsic property, predictable even before training begins. Additionally, while non-support samples are less influential in direct predictions, they play a critical role in preventing overfitting and in shaping generalization and representation learning. Notably, the importance of non-support samples increases in deeper layers, suggesting a significant role in intermediate representation formation. These insights shed light on the interplay between data and model decisions, offering a new dimension to understanding language model behavior and interpretability.