🤖 AI Summary
To address the high inference overhead, poor cache reusability, and performance–efficiency trade-offs of many-shot in-context learning (ICL) with long-context large language models (LLMs), this paper proposes two lightweight demonstration-selection strategies. The first combines a small number of demonstrations retrieved by semantic similarity to each test sample with a much larger, fixed set of random demonstrations whose computation can be cached and reused. The second replaces the random set with representative demonstrations chosen using k-means centroids derived from test-sample representations. The core idea is to jointly exploit per-query relevance and dataset-level representativeness while keeping most of the prompt identical across queries so that cached computation is reused. Evaluated with Gemini Pro and Flash on several standard benchmarks, the strategies consistently outperform random selection and match or surpass the strongest selection baseline while reducing inference cost by up to an order of magnitude, yielding a scalable, low-overhead solution for many-shot ICL in long-context settings.
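Below is a minimal sketch of the first strategy as described in the summary: a large fixed block of random demonstrations (cacheable as a shared prompt prefix) is combined with a small per-test-sample block retrieved by embedding similarity. The embedding dimensions, pool sizes, and helper names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(x: np.ndarray) -> np.ndarray:
    """L2-normalize vectors so dot products become cosine similarities."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def select_demonstrations(pool_embs: np.ndarray,
                          test_emb: np.ndarray,
                          cached_idx: np.ndarray,
                          n_similar: int = 5) -> list:
    """Return demonstration indices: fixed cached block first, then per-query picks."""
    sims = normalize(pool_embs) @ normalize(test_emb)
    sims[cached_idx] = -np.inf                      # do not duplicate cached demos
    similar_idx = np.argsort(sims)[-n_similar:][::-1]
    # Cached demonstrations lead the prompt so the shared prefix stays identical
    # across test samples and its computation can be reused by the serving stack.
    return list(cached_idx) + list(similar_idx)

# Toy usage: 2,000-demonstration pool with 64-dim embeddings (illustrative sizes).
pool_embs = rng.standard_normal((2000, 64))
cached_idx = rng.choice(2000, size=500, replace=False)   # drawn once, reused for all queries
test_emb = rng.standard_normal(64)
demo_order = select_demonstrations(pool_embs, test_emb, cached_idx)
```

Because the random block is sampled once and placed at the start of the prompt, only the short similarity-selected suffix changes from query to query.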
📝 Abstract
Long-context large language models (LLMs) are able to process inputs containing up to several million tokens. In the scope of in-context learning (ICL), this translates into using hundreds/thousands of demonstrations in the input prompt, enabling many-shot ICL. In practice, a fixed set of demonstrations is often selected at random in many-shot settings due to (1) high inference costs, (2) the benefits of caching and reusing computations, and (3) the similar performance offered by this strategy compared to others when scaled. In this work, we propose two straightforward strategies for demonstration selection in many-shot ICL that improve performance with minimal computational overhead. Our first method combines a small number of demonstrations, selected based on their similarity to each test sample, with a disproportionately larger set of random demonstrations that are cached. The second strategy improves the first by replacing random demonstrations with those selected using centroids derived from test sample representations via k-means clustering. Our experiments with Gemini Pro and Flash across several datasets indicate that our strategies consistently outperform random selection and surpass or match the most performant selection approach while supporting caching and reducing inference cost by up to an order of magnitude. We also show that adjusting the proportion of demonstrations selected based on different criteria can balance performance and inference cost in many-shot ICL.
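The second strategy swaps the random block for demonstrations chosen via k-means centroids over test-sample representations. The following sketch uses scikit-learn's KMeans; the cluster count, per-centroid pick count, and distance metric are assumptions for illustration and may differ from the paper's exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def centroid_guided_selection(pool_embs: np.ndarray,
                              test_embs: np.ndarray,
                              n_clusters: int = 50,
                              per_centroid: int = 10) -> list:
    """Pick pool demonstrations closest to each k-means centroid of the test set."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(test_embs)
    selected = []
    for center in km.cluster_centers_:
        dists = np.linalg.norm(pool_embs - center, axis=1)
        picked = 0
        for idx in np.argsort(dists):         # nearest pool items first
            if idx not in selected:           # keep the block duplicate-free
                selected.append(int(idx))
                picked += 1
            if picked == per_centroid:
                break
    return selected

# Toy usage: 2,000-demonstration pool, 300 test samples, 64-dim embeddings.
rng = np.random.default_rng(0)
pool_embs = rng.standard_normal((2000, 64))
test_embs = rng.standard_normal((300, 64))
cached_block = centroid_guided_selection(pool_embs, test_embs)   # 50 x 10 = 500 demos
```

Since the centroids depend only on the test set as a whole rather than on any individual query, the resulting block is fixed across queries and remains cacheable in the same way as the random block it replaces.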