Towards the Effect of Examples on In‐Context Learning: A Theoretical Case Study

📅 2024-10-12
🏛️ Stat
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the interplay, both synergistic and competitive, between pretraining knowledge and in-context examples in in-context learning (ICL), particularly when the two conflict, and examines how example label frequency and asymmetric label noise affect prediction accuracy. Focusing on binary classification, the paper introduces the first analytically tractable extension of the Gaussian mixture model that jointly characterizes the effects of the pretrained prior, the number of in-context examples, class imbalance, and asymmetric label noise. It theoretically establishes a critical example-count threshold: below it, pretraining knowledge dominates and is reinforced; above it, predictions become example-driven. It further reveals a pronounced degradation in minority-class performance and demonstrates class-dependent sensitivity to label noise. The theoretical findings are validated through both controlled simulations and real-world benchmarks. This work provides the first quantitative analytical framework for modeling ICL mechanisms, enabling principled diagnosis and design of in-context inference strategies.
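The example-count threshold can be illustrated with a minimal log-odds sketch (an assumed toy abstraction for illustration, not the paper's exact construction): the pretraining prior contributes a fixed log-odds for one class, each conflicting in-context example contributes evidence for the other, and the prediction flips once the accumulated evidence outweighs the prior.

```python
# Toy sketch of the example-count threshold (assumed abstraction, not the
# paper's exact model): pretraining knowledge acts as a fixed prior
# log-odds for class 0, while each conflicting in-context example adds
# evidence for class 1.
prior_log_odds = 4.0        # strength of the pretrained prior (favours class 0)
evidence_per_example = 0.5  # log-odds each clean example contributes to class 1

def icl_prediction(n_examples):
    """Predicted class after combining the prior with n conflicting examples."""
    posterior_log_odds = prior_log_odds - n_examples * evidence_per_example
    return 0 if posterior_log_odds > 0 else 1

for n in (0, 4, 8, 12):
    print(n, icl_prediction(n))
# Below n = prior_log_odds / evidence_per_example = 8 examples, the prior
# dominates (prediction 0); from the threshold onward, the examples take
# over (prediction 1).
```

In this sketch the threshold is simply the ratio of prior strength to per-example evidence; the paper's analysis derives the analogous quantity within its Gaussian mixture extension.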

📝 Abstract
In-context learning (ICL) has emerged as a powerful capability for large language models (LLMs) to adapt to downstream tasks by leveraging a few (demonstration) examples. Despite its effectiveness, the mechanism behind ICL remains underexplored. To better understand how ICL integrates the examples with the knowledge learned by the LLM during pre-training (i.e., pre-training knowledge) and how the examples impact ICL, this paper conducts a theoretical study of binary classification tasks. In particular, we introduce a probabilistic model extending the Gaussian mixture model to exactly quantify the impact of pre-training knowledge, label frequency and label noise on the prediction accuracy. Based on our analysis, when the pre-training knowledge contradicts the knowledge in the examples, whether the ICL prediction relies more on the pre-training knowledge or the examples depends on the number of examples. In addition, the label frequency and label noise of the examples both affect the accuracy of the ICL prediction: the minority class has a lower accuracy, and how the label noise impacts the accuracy is determined by the specific noise levels of the two classes. Extensive simulations are conducted to verify the correctness of the theoretical results, and real-data experiments also align with the theoretical insights. Our work reveals the role of pre-training knowledge and examples in ICL, offering a deeper understanding of LLMs' behaviours in classification tasks.
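The minority-class degradation can be sketched analytically in a two-Gaussian toy setup (illustrative assumptions of my own: class means at ±1, unit variance, not the paper's parameters): the Bayes-optimal decision boundary shifts away from the majority class as the imbalance grows, which lowers the minority class's accuracy.

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def per_class_accuracy(p_minority, mu=1.0, sigma=1.0):
    """Per-class accuracy of the Bayes classifier for N(-mu, sigma^2) vs
    N(+mu, sigma^2) when class 1 (mean +mu) has frequency p_minority."""
    p_major = 1.0 - p_minority
    # Boundary where p_major * N(x; -mu) equals p_minority * N(x; +mu)
    x_star = (sigma ** 2 / (2.0 * mu)) * math.log(p_major / p_minority)
    acc_major = phi((x_star + mu) / sigma)        # class-0 mass below the boundary
    acc_minor = 1.0 - phi((x_star - mu) / sigma)  # class-1 mass above the boundary
    return acc_major, acc_minor

for p in (0.5, 0.3, 0.1):
    a_major, a_minor = per_class_accuracy(p)
    print(f"minority freq {p}: majority acc {a_major:.3f}, minority acc {a_minor:.3f}")
```

At a balanced 0.5/0.5 split the two accuracies coincide; as the minority frequency falls, the boundary moves toward the minority mean and its accuracy drops well below the majority's.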
Problem

Research questions and friction points this paper is trying to address.

Explores how examples influence in-context learning in LLMs
Quantifies impact of pre-training knowledge and label noise
Analyzes when ICL relies on examples vs pre-training knowledge
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extends Gaussian mixture model for ICL analysis
Quantifies impact of pre-training knowledge and examples
Studies label frequency and noise effects on accuracy
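The class-dependent effect of label noise can be sketched in the same two-Gaussian toy setting (again an illustrative assumption: balanced classes, means ±1, unit variance, a plug-in classifier at the midpoint of the contaminated class means): asymmetric flip rates shift the decision boundary, and which class loses accuracy depends on which flip rate is larger.

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def noisy_accuracy(rho0, rho1, mu=1.0):
    """Per-class accuracy when a fraction rho0 of class-0 examples is
    mislabelled as 1 and rho1 of class-1 examples as 0 (balanced classes).
    The classifier thresholds at the midpoint of the contaminated means."""
    # Fraction of label-1 examples that are truly class 1, and likewise for label 0
    f1 = (1.0 - rho1) / ((1.0 - rho1) + rho0)
    f0 = (1.0 - rho0) / ((1.0 - rho0) + rho1)
    m1 = f1 * mu + (1.0 - f1) * (-mu)   # contaminated mean of label-1 examples
    m0 = f0 * (-mu) + (1.0 - f0) * mu   # contaminated mean of label-0 examples
    boundary = 0.5 * (m0 + m1)
    acc0 = phi(boundary + mu)           # class 0 correctly falls below the boundary
    acc1 = 1.0 - phi(boundary - mu)     # class 1 correctly falls above it
    return acc0, acc1

print(noisy_accuracy(0.0, 0.0))  # clean labels: equal per-class accuracy
print(noisy_accuracy(0.3, 0.0))  # class-0 labels noisier: class 0 suffers
print(noisy_accuracy(0.0, 0.3))  # class-1 labels noisier: class 1 suffers
```

With clean labels the two accuracies match; making one class's labels noisier pulls the boundary toward that class's mean and lowers its accuracy, mirroring the paper's claim that the noise impact is determined by the specific noise levels of the two classes.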