Refract ICL: Rethinking Example Selection in the Era of Million-Token Models

📅 2025-06-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the failure of conventional in-context learning (ICL) example selection strategies in million-token contexts, particularly with state-of-the-art long-context LLMs such as Gemini 1.5 Pro, this paper proposes Refract ICL. The method combines retrieval-based filtering, zero-shot prediction-error modeling, scheduled example repetition, and attention-guided reweighting, achieving what the authors describe as the first semantic-aware, dynamic reweighting of ICL examples. Crucially, it uses prediction-confidence signals to steer model attention toward difficult examples, moving beyond the traditional similarity-and-diversity paradigm. Evaluated on tasks with few output classes, Refract ICL achieves up to a 12.7% accuracy gain over current SOTA methods, establishing a scalable, interpretable framework for ICL in ultra-long-context settings and offering principled guidance for example selection beyond static heuristics.

📝 Abstract
The emergence of long-context large language models (LLMs) has enabled the use of hundreds, or even thousands, of demonstrations for in-context learning (ICL) - a previously impractical regime. This paper investigates whether traditional ICL selection strategies, which balance the similarity of ICL examples to the test input (using a text retriever) with diversity within the ICL set, remain effective when utilizing a large number of demonstrations. Our experiments demonstrate that, while longer contexts can accommodate more examples, simply increasing the number of demonstrations does not guarantee improved performance. Smart ICL selection remains crucial, even with thousands of demonstrations. To further enhance ICL in this setting, we introduce Refract ICL, a novel ICL selection algorithm specifically designed to focus LLM attention on challenging examples by strategically repeating them within the context and incorporating zero-shot predictions as error signals. Our results show that Refract ICL significantly improves the performance of extremely long-context models such as Gemini 1.5 Pro, particularly on tasks with a smaller number of output classes.
Problem

Research questions and friction points this paper is trying to address.

Evaluates traditional ICL selection strategies for long-context LLMs
Assesses impact of increasing demonstrations on ICL performance
Introduces Refract ICL to enhance attention on challenging examples
Innovation

Methods, ideas, or system contributions that make the work stand out.

Strategic repetition of challenging examples
Zero-shot predictions as error signals
Optimized for long-context LLMs
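The two core mechanisms above (flagging hard examples via zero-shot prediction errors, then repeating them in the prompt) can be sketched as a simple selection loop. This is an illustrative sketch, not the paper's implementation: the function names, the retriever interface, and the repetition factor are all assumptions.

```python
def zero_shot_error(predict, example):
    """True if the model's zero-shot prediction (no demonstrations
    in context) misses the gold label."""
    return predict(example["input"]) != example["label"]

def select_demonstrations(predict, retrieve, pool, test_input,
                          k=100, repeats=2):
    """Pick ICL demonstrations, repeating the ones the model
    gets wrong zero-shot so they draw more attention."""
    # 1. Retrieve the k pool examples most similar to the test input.
    candidates = retrieve(test_input, pool, k)
    # 2. Flag challenging examples using zero-shot errors as the signal.
    hard = [ex for ex in candidates if zero_shot_error(predict, ex)]
    easy = [ex for ex in candidates if not zero_shot_error(predict, ex)]
    # 3. Repeat the challenging examples within the context window.
    return easy + hard * repeats
```

With a long-context model, `k` can be in the hundreds or thousands; the repeated hard examples occupy extra context budget in exchange for focusing the model on the cases it would otherwise misclassify.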