🤖 AI Summary
This work addresses the challenge of balancing performance and computational cost in task-specific fine-tuning as data scales grow, where prevailing data selection strategies struggle to jointly account for sample importance, diversity, and inter-sample interactions. The authors propose CLIPPER, a training-free data selection framework that, for the first time, brings single-example in-context learning into data selection. By probing the responses of multimodal large language models to example-query pairs, CLIPPER disentangles parametric knowledge from world knowledge and constructs a core subset that matches the perplexity distribution of the original data. Requiring neither auxiliary scoring models nor heuristic clustering, CLIPPER achieves a 47% improvement in data efficiency on VRSBench and reduces training time by 37% on ScienceQA when applied to Qwen2.5-VL-7B and Llama-3.2-11B-Vision-Instruct, respectively, while matching the performance of full-data fine-tuning.
📝 Abstract
Injecting world knowledge into pretrained multimodal large language models (MLLMs) is essential for domain-specific applications. Task-specific fine-tuning achieves this by tailoring MLLMs to high-quality in-domain data, but it encounters scalability challenges as datasets grow, forcing a trade-off between performance and computational overhead. Existing data selection methods rely on additional scoring models or heuristic clustering and fail to jointly account for data importance and diversity. Moreover, both approaches overlook the interplay among training samples. To address these limitations, we propose CLIPPER, a training-free data selection pipeline that separates parametric knowledge from world knowledge and leverages in-context learning to probe model responses to different demonstration-query combinations. CLIPPER identifies coresets that mirror the original dataset's perplexity distribution, preserving critical samples while maintaining diversity. Experiments on two MLLMs and three datasets show that CLIPPER matches full fine-tuning performance at significantly lower cost: Qwen2.5-VL-7B attains a 47% improvement in data efficiency on VRSBench, and Llama-3.2-11B-Vision-Instruct reduces ScienceQA training time by 37%.
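The abstract's core mechanism, selecting a coreset whose perplexity distribution mirrors the full dataset's, can be illustrated with a minimal stratified-sampling sketch. This is not the paper's exact algorithm; the function name, the equal-frequency binning, and all parameters are illustrative assumptions.

```python
import random

def select_coreset(perplexities, ratio, num_bins=10, seed=0):
    """Illustrative sketch (not CLIPPER itself): split samples into
    equal-frequency perplexity strata, then draw the same fraction from
    every stratum so the subset's perplexity histogram mirrors the
    full dataset's distribution."""
    rng = random.Random(seed)
    n = len(perplexities)
    # Indices sorted by perplexity define quantile strata.
    order = sorted(range(n), key=lambda i: perplexities[i])
    coreset = []
    for b in range(num_bins):
        stratum = order[b * n // num_bins:(b + 1) * n // num_bins]
        take = round(len(stratum) * ratio)
        coreset.extend(rng.sample(stratum, min(take, len(stratum))))
    return coreset

# Hypothetical usage: keep ~30% of 100 samples while covering
# both low- and high-perplexity regions of the data.
perplexities = [i * 0.1 for i in range(100)]
subset = select_coreset(perplexities, ratio=0.3)
```

Because each stratum contributes proportionally, both easy (low-perplexity) and hard (high-perplexity) samples survive selection, which is the diversity-preserving property the abstract emphasizes.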