Add-One-In: Incremental Sample Selection for Large Language Models via a Choice-Based Greedy Paradigm

📅 2025-03-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of jointly optimizing sample quality, diversity, and computational efficiency in large language model (LLM) training data selection, this paper proposes an incremental sample selection framework based on “selection comparison.” Unlike conventional single-sample scoring paradigms, our method employs the LLM itself as a discriminator to quantify each candidate sample’s marginal contribution—i.e., the performance gain achieved when adding it to the current subset—and applies an incremental greedy strategy for efficient, diversity-aware selection. Crucially, the framework avoids exhaustive global dataset traversal, substantially reducing computational overhead. Experiments across multiple benchmarks demonstrate that models trained on only 30–50% of the original training data achieve performance comparable to or exceeding that of full-dataset training and state-of-the-art baselines. Furthermore, we validate the framework’s generalizability and practical utility on large-scale medical corpora.

📝 Abstract
Selecting high-quality and diverse training samples from extensive datasets plays a crucial role in reducing training overhead and enhancing the performance of Large Language Models (LLMs). However, existing studies fall short in assessing the overall value of selected data, focusing primarily on individual sample quality, and struggle to strike an effective balance between ensuring diversity and minimizing data point traversals. Therefore, this paper introduces a novel choice-based sample selection framework that shifts the focus from evaluating individual sample quality to comparing the contribution value of different samples when incorporated into the subset. Thanks to the advanced language understanding capabilities of LLMs, we utilize LLMs to evaluate the value of each option during the selection process. Furthermore, we design a greedy sampling process in which samples are incrementally added to the subset, improving efficiency by eliminating the need for exhaustive traversal of the entire dataset under a limited budget. Extensive experiments demonstrate that data selected by our method not only surpasses the performance of the full dataset but also achieves competitive results with state-of-the-art (SOTA) studies, while requiring fewer selections. Moreover, we validate our approach on a larger medical dataset, highlighting its practical applicability in real-world applications.
Problem

Research questions and friction points this paper is trying to address.

Selecting high-quality, diverse training samples for LLMs
Balancing diversity and minimizing data traversal in sample selection
Improving efficiency and performance with incremental sample selection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Choice-based sample selection framework
LLMs evaluate sample contribution value
Greedy incremental sampling process
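The three ideas above can be sketched together: build the subset one sample at a time, and at each step present a small set of candidate options to a judge that compares their marginal contribution to the current subset. This is a minimal illustration, not the paper's implementation; in the paper the judge is an LLM prompted to compare options, while here `score_fn` is a hypothetical stand-in for that comparison, and `options_per_step` is an assumed parameter name.

```python
import random


def choose_best_option(subset, options, score_fn):
    """Pick the option judged to contribute most when added to `subset`.

    `score_fn(subset, candidate)` stands in for the LLM-based comparison
    of each candidate's marginal contribution to the current subset.
    """
    return max(options, key=lambda cand: score_fn(subset, cand))


def greedy_select(pool, budget, score_fn, options_per_step=4, seed=0):
    """Incrementally build a subset of size `budget` from `pool`.

    At each step, only a small random set of candidate options is drawn
    (avoiding exhaustive traversal of the full dataset), and the option
    with the highest judged marginal contribution is added.
    """
    rng = random.Random(seed)
    pool = list(pool)
    subset = []
    while len(subset) < budget and pool:
        k = min(options_per_step, len(pool))
        options = rng.sample(pool, k)
        best = choose_best_option(subset, options, score_fn)
        subset.append(best)
        pool.remove(best)
    return subset
```

As a toy usage, a diversity-rewarding `score_fn` (distance to the nearest already-selected sample) makes the greedy loop spread its picks across the pool:

```python
def diversity_score(subset, cand):
    # Reward candidates far from everything already selected.
    if not subset:
        return 0.0
    return min(abs(cand - s) for s in subset)

selected = greedy_select(range(100), budget=5, score_fn=diversity_score, seed=42)
```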
Zhuo Li
Shenzhen Research Institute of Big Data, The Chinese University of Hong Kong, Shenzhen
Yuhao Du
Shenzhen Research Institute of Big Data, The Chinese University of Hong Kong, Shenzhen
Xiaoqi Jiao
Huazhong University of Science and Technology
Large Language Models · Multimodal LLM · NLP · Deep Learning
Yiwen Guo
Research Scientist
Machine Learning · Deep Learning · Image Processing
Yuege Feng
Birmingham City University
Xiang Wan
Shenzhen Research Institute of Big Data
Bioinformatics · Data Mining · Big Data Analysis
Anningzhe Gao
Shenzhen Research Institute of Big Data, The Chinese University of Hong Kong, Shenzhen
Jinpeng Hu
Hefei University of Technology
natural language processing · named entity recognition · summarization