In-Context Learning with Iterative Demonstration Selection

📅 2023-10-15
🏛️ Conference on Empirical Methods in Natural Language Processing
📈 Citations: 31
Influential: 1
🤖 AI Summary
Existing in-context learning (ICL) methods are highly sensitive to demonstration selection, and static, single-dimensional selection strategies generalize poorly across tasks. To address this, the authors propose Iterative Demonstration Selection (IDS), a framework that dynamically balances semantic similarity and diversity. IDS introduces an iterative feedback mechanism grounded in zero-shot chain-of-thought (Zero-shot-CoT) reasoning paths, enabling task-adaptive, multi-dimensional demonstration filtering. It further combines iterative context reconstruction and reasoning-path-driven semantic matching with a majority-voting ensemble to improve stability. Evaluated on diverse tasks, including logical reasoning, question answering, and topic classification, IDS consistently outperforms existing ICL demonstration selection approaches and shows stronger robustness to input perturbations and better cross-task generalization.
📝 Abstract
Spurred by advancements in scale, large language models (LLMs) have demonstrated strong few-shot learning ability via in-context learning (ICL). However, the performance of ICL has been shown to be highly sensitive to the selection of few-shot demonstrations. Selecting the most suitable examples as context remains an ongoing challenge and an open problem. Existing literature has highlighted the importance of selecting examples that are diverse or semantically similar to the test sample while ignoring the fact that the optimal selection dimension, i.e., diversity or similarity, is task-specific. Based on how the test sample is answered, we propose Iterative Demonstration Selection (IDS) to leverage the merits of both dimensions. Using zero-shot chain-of-thought reasoning (Zero-shot-CoT), IDS iteratively selects examples that are diverse but still strongly correlated with the test sample as ICL demonstrations. Specifically, IDS applies Zero-shot-CoT to the test sample before demonstration selection. The output reasoning path is then used to choose demonstrations that are prepended to the test sample for inference. The generated answer is followed by its corresponding reasoning path for extracting a new set of demonstrations in the next iteration. After several iterations, IDS adopts majority voting to obtain the final result. Through extensive experiments on tasks including reasoning, question answering, and topic classification, we demonstrate that IDS can consistently outperform existing ICL demonstration selection methods.
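The loop described in the abstract (Zero-shot-CoT on the test sample, reasoning-path-driven demonstration selection, few-shot inference, repeat, then majority vote) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `llm` callable, the bag-of-words embedding, and the similarity-based `select_demos` helper are all simplifying assumptions standing in for a real LLM and a stronger text encoder.

```python
from collections import Counter
import math


def embed(text):
    # Toy bag-of-words embedding; the paper presumably uses a stronger encoder.
    return Counter(text.lower().split())


def cosine(a, b):
    # Cosine similarity between two sparse bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def select_demos(reasoning_path, pool, k):
    # Pick the k pool examples most correlated with the current reasoning path.
    scored = sorted(
        pool,
        key=lambda ex: cosine(embed(reasoning_path), embed(ex["question"])),
        reverse=True,
    )
    return scored[:k]


def ids(test_question, pool, llm, k=2, iterations=3):
    """Sketch of Iterative Demonstration Selection (IDS).

    `llm(demos, question)` is a hypothetical interface returning
    (answer, reasoning_path) for the question given the demonstrations.
    """
    # Step 1: Zero-shot-CoT on the bare test sample yields an initial reasoning path.
    _, path = llm([], test_question)
    answers = []
    for _ in range(iterations):
        # Step 2: use the latest reasoning path to choose demonstrations,
        # then run few-shot inference with them prepended to the test sample.
        demos = select_demos(path, pool, k)
        answer, path = llm(demos, test_question)
        answers.append(answer)
    # Step 3: majority vote over the per-iteration answers.
    return Counter(answers).most_common(1)[0][0]
```

A stubbed `llm` that returns a fixed (answer, reasoning path) pair is enough to exercise the control flow; in practice each call would query the model with the selected demonstrations in context.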
Problem

Research questions and friction points this paper is trying to address.

Few-shot Learning
Large Language Models
Example Selection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Iterative Demonstration Selection (IDS)
Zero-shot Chain-of-Thought
Large Language Model Optimization
👥 Authors
Chengwei Qin
HKUST(GZ), NTU
LLM, NLP
Aston Zhang
OpenAI
Machine Learning, Large Language Models
Anirudh Dagar
Amazon Web Services
Wenming Ye
Amazon Web Services