Model-agnostic Coreset Selection via LLM-based Concept Bottlenecks

📅 2025-02-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing coreset selection methods rely on training the full downstream model, incurring high computational cost and offering poor interpretability, and they fail to disentangle a sample's intrinsic difficulty from model-specific difficulty. To address this, the paper proposes a model-agnostic, interpretable coreset selection framework: large language models (LLMs) automatically extract human-understandable semantic concepts, which form a lightweight linear concept bottleneck; a sample's learning difficulty is then quantified by the alignment between its visual features and the concept representations, enabling difficulty estimation without labels and without training the downstream model. The approach is the first to integrate LLM-derived concept bottlenecks into coreset selection, supporting unsupervised difficulty modeling and stratified sampling. Evaluated on CIFAR-10/100 and ImageNet-1K, it significantly outperforms random sampling and, at high pruning ratios, matches or surpasses state-of-the-art methods based on training dynamics, all without any downstream training.

📝 Abstract
Coreset Selection (CS) identifies a subset of training data that achieves model performance comparable to using the entire dataset. Many state-of-the-art CS methods select coresets using scores whose computation requires training the downstream model on the entire dataset and recording changes in its behavior on samples as it trains (training dynamics). These scores are inefficient to compute and hard to interpret, as they do not indicate whether a sample is difficult to learn in general or only for a specific model. Our work addresses these challenges by proposing an interpretable score that gauges a sample's difficulty using human-understandable textual attributes (concepts), independent of any downstream model. Specifically, we measure the alignment between a sample's visual features and concept bottlenecks, derived via large language models, by training a linear concept bottleneck layer, and we compute the sample's difficulty score from it. We then use this score with a stratified sampling strategy to identify the coreset. Crucially, our score is efficiently computable without training the downstream model on the full dataset even once, leads to high-performing coresets for various downstream models, and is computable even for an unlabeled dataset. Through experiments on CIFAR-10, CIFAR-100, and ImageNet-1K, we show our coresets outperform random subsets, even at high pruning rates, and achieve model performance comparable to or better than coresets found by training dynamics-based methods.
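The abstract's core idea, scoring a sample by how well its visual features align with LLM-derived concepts, can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, the choice of cosine similarity as the alignment measure, and the mapping of the best concept match to a difficulty value are all assumptions made for illustration.

```python
import numpy as np

def concept_alignment_difficulty(visual_feats, concept_embs):
    """Hypothetical sketch of a concept-alignment difficulty score.

    visual_feats: (n, d) array of L2-normalized image embeddings.
    concept_embs: (k, d) array of L2-normalized embeddings of
        LLM-generated textual concepts.
    Returns one score per sample in [0, 1]; higher = harder.
    """
    # Concept activations: cosine similarity of each sample to every concept.
    acts = visual_feats @ concept_embs.T          # shape (n, k)
    # A sample that strongly matches at least one concept is treated as
    # "easy"; one with only weak activations everywhere is "hard".
    best = acts.max(axis=1)                       # in [-1, 1]
    return 1.0 - (best + 1.0) / 2.0               # map [-1, 1] -> [1, 0]
```

Note the score needs no labels and no downstream model, matching the abstract's claim that the score is computable even for an unlabeled dataset; the paper additionally trains a linear bottleneck layer, which this sketch omits.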
Problem

Research questions and friction points this paper is trying to address.

Efficient coreset selection without full model training
Interpretable difficulty scores using textual attributes
High-performing coresets for various downstream models
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based concept bottlenecks
Model-agnostic coreset selection
Efficient difficulty score computation
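The stratified sampling step mentioned in the abstract can be sketched as follows. The function, the quantile-based binning, and the shortfall top-up are illustrative assumptions, not the paper's exact procedure: the idea is simply to draw from every difficulty stratum so the coreset spans easy and hard samples.

```python
import numpy as np

def stratified_coreset(scores, budget, n_strata=5, seed=0):
    """Hypothetical sketch of difficulty-stratified coreset sampling.

    scores: per-sample difficulty scores (e.g. from the concept
        alignment score); budget: coreset size to select.
    Returns sorted indices of the selected coreset.
    """
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores)
    # Quantile edges split samples into roughly equal-sized strata.
    edges = np.quantile(scores, np.linspace(0, 1, n_strata + 1)[1:-1])
    strata = np.digitize(scores, edges)           # stratum id 0..n_strata-1
    per = budget // n_strata
    picked = []
    for s in range(n_strata):
        members = np.flatnonzero(strata == s)
        take = min(per, len(members))
        picked.extend(rng.choice(members, size=take, replace=False).tolist())
    # Fill any shortfall (small strata, rounding) from unpicked samples.
    shortfall = budget - len(picked)
    if shortfall > 0:
        pool = np.setdiff1d(np.arange(len(scores)), picked)
        picked.extend(rng.choice(pool, size=shortfall, replace=False).tolist())
    return np.sort(np.array(picked))
```

Sampling across strata rather than keeping only the hardest samples is a common safeguard at high pruning rates, where hardest-only selection tends to over-represent outliers.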