🤖 AI Summary
This work investigates “performance-critical tokens” (PCTs): tokens in large language models’ (LLMs’) in-context learning (ICL) prompts whose representations directly influence task performance. Method: The authors propose a three-way token-role taxonomy (content, stopword, and template tokens) and employ attention-level representation ablation, multi-dimensional feature analysis (structural, semantic, and repetition-based), and cross-task ICL evaluation. Contribution/Results: They provide the first empirical evidence that stopword and template tokens are more performance-critical than content tokens; their criticality arises not from encoding semantics directly, but from aggregating and structuring content information. Counterintuitively, this information-integration pattern diverges from human attention, which favors informative words. Moreover, the work establishes causal links between token roles and ICL performance, offering a paradigm for probing ICL’s internal mechanisms and enabling controllable, role-aware interventions.
📝 Abstract
In-context learning (ICL) has emerged as an effective solution for few-shot learning with large language models (LLMs). However, how LLMs leverage demonstrations to specify a task and learn a corresponding computational function through ICL is underexplored. Drawing from the way humans learn from content-label mappings in demonstrations, we categorize the tokens in an ICL prompt into content, stopword, and template tokens. Our goal is to identify the types of tokens whose representations directly influence an LLM's performance, a property we refer to as being performance-critical. By ablating representations from the attention of the test example, we find that the representations of informative content tokens have less influence on performance compared to template and stopword tokens, which contrasts with human attention to informative words. We give evidence that the representations of performance-critical tokens aggregate information from the content tokens. Moreover, we demonstrate experimentally that lexical meaning, repetition, and structural cues are the main distinguishing characteristics of these tokens. Our work sheds light on how large language models learn to perform tasks from demonstrations and deepens our understanding of the roles different types of tokens play in large language models.
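The ablation the abstract describes (blocking the test example's attention to a chosen class of context tokens) can be sketched in a few lines. This is a minimal single-head toy, not the paper's implementation: the dimensions, token positions, and the `template_positions` labels are hypothetical stand-ins for a real model's tokenized prompt.

```python
import numpy as np

def softmax(x):
    x = x - x.max()            # numerically stable softmax
    e = np.exp(x)
    return e / e.sum()

def ablate_and_attend(q, K, V, ablate_idx):
    """One attention head's read-out for the test-example query q,
    with attention to the positions in ablate_idx blocked."""
    scores = q @ K.T / np.sqrt(K.shape[-1])   # (num_context_tokens,)
    scores[list(ablate_idx)] = -np.inf        # masked positions get zero weight
    weights = softmax(scores)
    return weights @ V, weights

# Toy prompt of 6 demonstration tokens; the role labels are illustrative.
rng = np.random.default_rng(0)
d = 8
q = rng.normal(size=d)            # query from the test example
K = rng.normal(size=(6, d))       # keys of the demonstration tokens
V = rng.normal(size=(6, d))       # values of the demonstration tokens
template_positions = [2, 5]       # e.g. "Answer:"-style template tokens

out_full, w_full = ablate_and_attend(q, K, V, [])
out_ablated, w_ablated = ablate_and_attend(q, K, V, template_positions)
```

Comparing `out_full` with `out_ablated` across token classes (content vs. stopword vs. template) is the kind of contrast the performance-criticality finding rests on: the class whose removal degrades the output most is the most performance-critical.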