AI Summary
The mechanisms by which large language models (LLMs) encode knowledge during pretraining and retrieve it during inference remain poorly understood.
Method: We propose the *function token hypothesis*, positing that function tokens (e.g., punctuation marks, articles, prepositions, and conjunctions) serve dual roles: during inference, they activate broad contextual features to guide generation; during pre-training, they drive knowledge accumulation and consolidation via next-token prediction of the content tokens that follow them. We unify these roles through a memory-operation lens and formalize a mechanism linking memory retrieval and consolidation. Using bipartite graph analysis, feature visualization, and case studies, we empirically examine function token behavior.
Contribution/Results: We find that a small set of function tokens activates the majority of intermediate-layer features; moreover, the training loss is concentrated on predicting the content tokens that follow function tokens. These results reveal a critical implicit pathway for knowledge organization in LLMs, offering a novel paradigm for interpretability and controllable generation.
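As a concrete illustration of the bipartite-graph coverage claim above, the sketch below builds a token–feature bipartite view from a binary activation matrix and measures what fraction of all activated features are reachable from function tokens alone. The activation matrix, the function-token mask, and all sizes are hypothetical placeholders; in practice the features would come from intermediate-layer representations (e.g., sparse-autoencoder features).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data standing in for real measurements: a binary token-feature
# activation matrix (activates[t, f] is True if token position t activates
# feature f at some intermediate layer), plus a mask marking function tokens.
n_tokens, n_features = 10_000, 4_096
activates = rng.random((n_tokens, n_features)) < 0.01
is_function_token = rng.random(n_tokens) < 0.3  # punctuation, articles, prepositions, ...

# Bipartite view: edges connect token positions to the features they activate.
# Coverage = share of all activated features reachable from function tokens alone.
features_any = activates.any(axis=0)
features_from_function = activates[is_function_token].any(axis=0)
coverage = features_from_function.sum() / max(int(features_any.sum()), 1)

print(f"function tokens activate {coverage:.1%} of all activated features")
```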
Abstract
The remarkable success of large language models (LLMs) stems from their ability to consolidate vast amounts of knowledge into memory during pre-training and to retrieve it from memory during inference, enabling advanced capabilities such as knowledge memorization, instruction following, and reasoning. However, the mechanisms of memory retrieval and consolidation in LLMs remain poorly understood. In this paper, we propose the function token hypothesis to explain the workings of LLMs: during inference, function tokens activate the most predictive features from context and govern next-token prediction (memory retrieval); during pre-training, predicting the next tokens (usually content tokens) that follow function tokens increases the number of learned features and updates the model parameters (memory consolidation). Function tokens here roughly correspond to function words in linguistics, including punctuation marks, articles, prepositions, and conjunctions, in contrast to content tokens. We provide extensive experimental evidence supporting this hypothesis. Using bipartite graph analysis, we show that a small number of function tokens activate the majority of features. Case studies further reveal how function tokens activate the most predictive features from context to direct next-token prediction. We also find that, during pre-training, the training loss is dominated by predicting the next content tokens that follow function tokens, which forces the function tokens to select the most predictive features from context.
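To make the loss-decomposition claim concrete, here is a minimal sketch, assuming a Hugging Face causal LM, that computes the per-position next-token loss and splits it by whether the preceding token is a function token. The model name, the example text, and the FUNCTION_WORDS set are illustrative assumptions, not the paper's actual setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder; any causal LM works
tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

# Hypothetical function-token list: punctuation, articles, prepositions, conjunctions.
FUNCTION_WORDS = {",", ".", ";", ":", "the", "a", "an", "of", "in", "on", "at",
                  "and", "or", "but", "to", "is", "are"}

text = "The capital of France is Paris, and the capital of Italy is Rome."
input_ids = tok(text, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits  # (1, T, vocab)

# Per-position negative log-likelihood of the next token.
log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
targets = input_ids[:, 1:]
nll = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)[0]  # (T-1,)

# Split the loss by whether the token *preceding* each prediction is a function token.
after_function, after_content = [], []
for pos in range(nll.shape[0]):
    prev = tok.decode([input_ids[0, pos].item()]).strip().lower()
    (after_function if prev in FUNCTION_WORDS else after_content).append(nll[pos].item())

mean = lambda xs: sum(xs) / max(len(xs), 1)
print(f"mean loss after function tokens: {mean(after_function):.3f}")
print(f"mean loss after content tokens:  {mean(after_content):.3f}")
```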