🤖 AI Summary
To address the lack of principled methods for demonstration selection in many-shot in-context learning (ICL), this paper proposes a gradient-matching demonstration selection mechanism. It estimates how well few-shot demonstrations align, in both direction and magnitude, with the full fine-tuning gradient in an implicit gradient space, thereby approximating the optimization dynamics of full fine-tuning with a small number of examples. This work is the first to bring fine-tuning gradient alignment into ICL demonstration selection, overcoming the limitations of random sampling and instance-level retrieval while establishing optimization-level comparability between ICL and fine-tuning. The method integrates parameter-efficient gradient estimation, cross-model transfer (small models guide large-model ICL), and multi-task reweighting. Evaluated across nine datasets and shot counts from 4 to 128, it consistently outperforms random selection, with an average accuracy gain of about +4% on open-weight models (e.g., Qwen2.5-72B, Llama3-70B) and about +2% on five proprietary large language models.
📝 Abstract
In-Context Learning (ICL) enables Large Language Models (LLMs) to adapt rapidly to new tasks without Fine-Tuning (FT), but its reliance on demonstration selection remains a critical challenge. While many-shot ICL shows promising performance through scaled demonstrations, existing work still selects many-shot demonstrations at random. Since conventional instance-level retrieval is not suitable for many-shot scenarios, we hypothesize that the data requirements of in-context learning and fine-tuning are analogous. To this end, we introduce a novel gradient-matching approach that selects demonstrations by aligning the fine-tuning gradients of the selected examples with those of the target task's entire training set, so that the selected examples approximate the learning effect of training on the full set. Through gradient matching on relatively small models, e.g., Qwen2.5-3B or Llama3-8B, our method consistently outperforms random selection on larger LLMs from 4-shot to 128-shot settings across 9 diverse datasets. For instance, it surpasses random selection by 4% on Qwen2.5-72B and Llama3-70B, and by around 2% on 5 closed-source LLMs. This work unlocks more reliable and effective many-shot ICL, paving the way for its broader application.
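As a rough illustration of the gradient-matching idea (a sketch, not the paper's actual algorithm), the snippet below greedily picks examples whose summed per-example gradient vectors best align, by cosine similarity, with the full training set's mean gradient. The `example_grads` matrix is assumed to hold per-example gradients already extracted from a small proxy model (e.g., over parameter-efficient adapter weights); how those gradients are obtained is outside this sketch.

```python
import numpy as np

def select_demonstrations(example_grads: np.ndarray, k: int) -> list:
    """Greedy gradient-matching selection (illustrative sketch only).

    example_grads: (n, d) array; row i is a gradient vector for example i,
    e.g., the flattened gradient of a small proxy model's adapter weights.
    Returns indices of k examples whose accumulated gradient best aligns
    (cosine similarity) with the mean gradient over the whole training set.
    """
    # The "full fine-tuning gradient" target: mean over all examples.
    target = example_grads.mean(axis=0)
    target_norm = np.linalg.norm(target) + 1e-12

    selected = []
    running_sum = np.zeros_like(target)
    for _ in range(k):
        best_i, best_score = -1, -np.inf
        for i in range(len(example_grads)):
            if i in selected:
                continue
            # Cosine similarity between the candidate subset's summed
            # gradient and the full-set gradient.
            cand = running_sum + example_grads[i]
            score = cand @ target / ((np.linalg.norm(cand) + 1e-12) * target_norm)
            if score > best_score:
                best_i, best_score = i, score
        selected.append(best_i)
        running_sum += example_grads[best_i]
    return selected
```

In this greedy formulation each step adds the example that most improves the subset's alignment with the full-set gradient, which is one simple way to realize "approaching the learning effect of the entire training set within the selected examples."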