One size doesn't fit all: Predicting the Number of Examples for In-Context Learning

📅 2024-03-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the performance bottleneck in in-context learning (ICL) caused by a fixed k-shot example count, this paper proposes an adaptive method that predicts the optimal number of demonstrations for each input instance. The core innovation is a multi-label classifier that explicitly models the mapping between the number of examples *k* and prediction correctness. The method integrates three components: similarity-based demonstration retrieval, per-*k* correctness modeling, and training on text classification benchmarks. By abandoning the "one-size-fits-all" static *k* setting, the method adapts the example count at the level of individual instances. Evaluation on multiple standard text classification benchmarks shows substantial improvements over conventional ICL, with absolute accuracy gains of up to 17%, empirically confirming that dynamically calibrating the number of examples matters for few-shot inference.
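The per-*k* correctness labels can be sketched as follows. This is a minimal illustration under our own assumptions, not the paper's code: `run_icl` is a hypothetical callable standing in for a k-shot LLM call, and `retrieved` is assumed to be sorted by similarity to the query.

```python
# Build the multi-label target vector for one training instance:
# y[k] == 1 iff k-shot ICL predicts the gold label correctly.
from typing import Callable, List


def build_k_labels(
    query: str,
    gold: str,
    retrieved: List[str],  # demonstrations, most similar first
    run_icl: Callable[[str, List[str]], str],  # hypothetical LLM call
    max_k: int,
) -> List[int]:
    """Probe every k from 0 to max_k and record correctness."""
    labels = []
    for k in range(max_k + 1):
        pred = run_icl(query, retrieved[:k])
        labels.append(1 if pred == gold else 0)
    return labels
```

The resulting binary vectors serve as training targets for the multi-label classifier described in the summary.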

📝 Abstract
In-context learning (ICL) refers to the process of adding a small number of localized examples from a training set of labelled data to an LLM's prompt, with the objective of effectively controlling the generative process to improve downstream task performance. Existing ICL approaches use an identical number of examples (a pre-configured hyper-parameter) for each data instance. Our work alleviates the limitations of this 'one fits all' approach by dynamically predicting the number of examples for each data instance to be used in few-shot inference with LLMs. In particular, we employ a multi-label classifier, the parameters of which are fitted using a training set, where the label for each instance in this training set indicates whether using a specific value of k (number of most similar examples, from 0 up to a maximum value) leads to correct k-shot downstream predictions. Our experiments on a number of text classification benchmarks show that our adaptive approach (AICL) substantially outperforms standard ICL by up to 17%.
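At inference time, the trained multi-label classifier yields one confidence score per candidate k, from which a single k must be chosen. The abstract does not specify the selection rule; the sketch below assumes one plausible policy (smallest k whose predicted correctness clears a threshold, falling back to the highest-scoring k), purely for illustration.

```python
from typing import List


def select_k(scores: List[float], threshold: float = 0.5) -> int:
    """Pick the number of demonstrations for one test instance.

    scores[k] is the classifier's confidence that k-shot ICL would be
    correct. We take the smallest k that clears the threshold (cheaper
    prompts preferred); if none does, fall back to the argmax.
    Both rules are our assumption, not taken from the paper.
    """
    for k, score in enumerate(scores):
        if score >= threshold:
            return k
    return max(range(len(scores)), key=lambda k: scores[k])
```

The chosen k then determines how many of the most similar retrieved examples are placed in the prompt for that instance.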
Problem

Research questions and friction points this paper is trying to address.

In-Context Learning
Optimal Number of Examples
Few-Shot Examples
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic Number of Examples
In-Context Learning
Accuracy Improvement