AI Summary
In-context learning (ICL) for vision-language models (VLMs) critically depends on example selection, a provably NP-hard problem. Existing strategies, including random sampling, similarity-based selection, and information-theoretic scoring, struggle to balance computational efficiency and effectiveness. To address this, we propose CoDR, the first framework to adapt the coreset paradigm to ICL example selection. CoDR constructs a diverse core subset via clustering-based pruning and introduces a two-stage retrieval mechanism that jointly optimizes example similarity to the query and mutual information among the selected examples, under query-alignment constraints. The method comprises three key components: cluster pruning, diversity-aware subset construction, and collaborative dual-stage retrieval. Extensive experiments across multiple vision-language benchmarks demonstrate that CoDR consistently outperforms state-of-the-art baselines, achieving significant gains in both ICL accuracy and computational efficiency.
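The clustering-based pruning step can be illustrated with a minimal sketch. The function below (names, parameters, and the plain k-means procedure are illustrative assumptions, not the paper's exact algorithm) clusters example embeddings and keeps only the examples nearest each centroid, so the retained subset covers the embedding space rather than collapsing onto one region:

```python
import numpy as np

def build_coreset(embeddings, n_clusters=8, per_cluster=4, n_iters=10, seed=0):
    """Hypothetical sketch of cluster-pruning coreset construction:
    k-means over example embeddings, then keep the per_cluster examples
    nearest each centroid so the retained subset stays diverse."""
    rng = np.random.default_rng(seed)
    X = np.asarray(embeddings, dtype=float)
    # Initialize centroids from randomly chosen examples.
    centroids = X[rng.choice(len(X), size=n_clusters, replace=False)]
    for _ in range(n_iters):
        # Assign each example to its nearest centroid.
        dists = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        assign = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned examples.
        for k in range(n_clusters):
            if (assign == k).any():
                centroids[k] = X[assign == k].mean(axis=0)
    # Prune: from each cluster, keep the examples closest to the centroid.
    keep = []
    for k in range(n_clusters):
        idx = np.flatnonzero(assign == k)
        if idx.size:
            order = np.argsort(np.linalg.norm(X[idx] - centroids[k], axis=1))
            keep.extend(idx[order[:per_cluster]].tolist())
    return sorted(keep)
```

The per-cluster cap is what enforces diversity: no single dense region of the example pool can dominate the coreset.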
Abstract
In-context learning (ICL) has emerged as a powerful paradigm for Large Visual Language Models (LVLMs), enabling them to learn from a few demonstrations provided directly in the input context. However, the effectiveness of this approach depends heavily on demonstration selection, an NP-hard problem. Traditional strategies, including random sampling, similarity-based sampling, and InfoScore-based sampling, often lead to inefficiency or suboptimal performance, struggling to balance efficiency and effectiveness in demonstration selection. In this paper, we propose a novel demonstration selection framework named Coreset-based Dual Retrieval (CoDR). We show that samples within a diverse subset achieve higher expected mutual information. To implement this, we introduce a cluster-pruning method to construct a diverse coreset that aligns more effectively with the query while maintaining diversity. Additionally, we develop a dual retrieval mechanism that enables globally informed demonstration selection while preserving efficiency. Experimental results demonstrate that our method significantly improves ICL performance compared to existing strategies, providing a robust solution for effective and efficient demonstration selection.
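The dual retrieval mechanism described above can be sketched as a two-stage procedure. The snippet below is a simplified illustration under stated assumptions: stage one shortlists coreset examples by cosine similarity to the query, and stage two greedily reranks the shortlist with an MMR-style score that trades query similarity against redundancy with already-selected examples (a stand-in for the paper's mutual-information criterion; the function name and parameters are hypothetical):

```python
import numpy as np

def dual_retrieve(query, coreset, m=16, k=4, lam=0.5):
    """Illustrative two-stage retrieval: (1) shortlist the m coreset examples
    most similar to the query; (2) greedily pick k of them, balancing query
    similarity against redundancy with examples already picked."""
    X = np.asarray(coreset, dtype=float)
    q = np.asarray(query, dtype=float)
    # Cosine similarities via unit-normalized embeddings.
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    qn = q / np.linalg.norm(q)
    sim_q = Xn @ qn
    # Stage 1: coarse similarity shortlist.
    shortlist = np.argsort(-sim_q)[:m]
    # Stage 2: greedy diversity-aware rerank over the shortlist.
    picked = []
    while len(picked) < min(k, len(shortlist)):
        best, best_score = None, -np.inf
        for i in shortlist:
            if i in picked:
                continue
            # Redundancy = similarity to the closest already-picked example.
            red = max((Xn[i] @ Xn[j] for j in picked), default=0.0)
            score = lam * sim_q[i] - (1 - lam) * red
            if score > best_score:
                best, best_score = int(i), score
        picked.append(best)
    return picked
```

Restricting the expensive stage-two scoring to the stage-one shortlist is what keeps selection efficient while still accounting for interactions among the chosen demonstrations.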