Learning What Matters: Prioritized Concept Learning via Relative Error-driven Sample Selection

📅 2025-06-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the bottlenecks of instruction tuning for vision-language models (VLMs), namely heavy reliance on large-scale annotated data, high computational cost, and hand-crafted curriculum priors, this paper proposes PROGRESS: a gradient-free, annotation-free dynamic learning framework that requires no external supervision. Its core innovation is a relative-error-driven progressive concept learning mechanism that tracks fine-grained, skill-level learning progress and dynamically prioritizes samples by difficulty. Through lightweight online querying, PROGRESS automatically identifies the most learnable and informative samples to build an adaptive training sequence. Evaluated on instruction-tuning datasets of varying scales, PROGRESS consistently outperforms state-of-the-art methods while substantially reducing training data and annotation requirements. It also shows strong cross-architecture generalization, transferring to larger VLMs without architectural modification.

📝 Abstract
Instruction tuning has been central to the success of recent vision-language models (VLMs), but it remains expensive, requiring large-scale datasets, high-quality annotations, and large compute budgets. We propose PRioritized cOncept learninG via Relative Error-driven Sample Selection (PROGRESS), a data- and compute-efficient framework that enables VLMs to dynamically select what to learn next based on their evolving needs during training. At each stage, the model tracks its learning progress across skills and selects the most informative samples: those it has not already mastered and that are not too difficult to learn at the current stage of training. This strategy effectively controls skill acquisition and the order in which skills are learned. Specifically, we sample from skills showing the highest learning progress, prioritizing those with the most rapid improvement. Unlike prior methods, PROGRESS requires no upfront answer annotations, queries answers only on a need basis, avoids reliance on additional supervision from auxiliary VLMs, and does not require compute-heavy gradient computations for data selection. Experiments across multiple instruction-tuning datasets of varying scales demonstrate that PROGRESS consistently outperforms state-of-the-art baselines with much less data and supervision. Additionally, we show strong cross-architecture generalization and transferability to larger models, validating PROGRESS as a scalable solution for efficient learning.
Problem

Research questions and friction points this paper is trying to address.

Dynamic sample selection for efficient VLM training
Prioritizing high-progress skills during instruction tuning
Reducing data and compute costs in concept learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic sample selection based on learning progress
Prioritizes skills with rapid improvement
No upfront annotations or heavy computations
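The selection principle summarized above, sampling from the skills whose performance is improving fastest, can be sketched roughly as follows. This is an illustrative sketch, not the paper's implementation; the class name, the sliding-window progress estimate, and the softmax temperature are all assumptions introduced here:

```python
import math
import random
from collections import deque

class ProgressSampler:
    """Illustrative sketch of learning-progress-driven skill sampling.

    Keeps a short sliding window of per-skill accuracy estimates and
    samples skills in proportion to a softmax over their recent
    improvement, so rapidly improving skills are queried more often.
    All design details here are hypothetical, not from the paper.
    """

    def __init__(self, skills, window=2, temperature=1.0):
        self.history = {s: deque(maxlen=window) for s in skills}
        self.temperature = temperature

    def update(self, skill, accuracy):
        """Record the latest accuracy estimate for a skill."""
        self.history[skill].append(accuracy)

    def progress(self, skill):
        """Improvement over the window; 0.0 until two estimates exist."""
        h = self.history[skill]
        if len(h) < 2:
            return 0.0  # not enough signal yet; neutral priority
        return h[-1] - h[0]

    def sample_skill(self):
        """Draw one skill, weighted by softmax of recent progress."""
        skills = list(self.history)
        weights = [math.exp(self.progress(s) / self.temperature)
                   for s in skills]
        return random.choices(skills, weights=weights, k=1)[0]
```

In use, the trainer would call `update` with fresh accuracy estimates after each evaluation round and `sample_skill` to pick which skill's samples to train on next; the temperature controls how sharply sampling concentrates on the fastest-improving skills.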