Holdout-Loss-Based Data Selection for LLM Finetuning via In-Context Learning

πŸ“… 2025-10-16
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address the dilution of supervisory signals by noisy data in large language model supervised fine-tuning (SFT), this paper proposes an efficient data selection framework based on holdout loss. The core innovation is the *In-Context Approximation* (ICA) method, which estimates the holdout loss a model would incur after training on a candidate example by conditioning on a small holdout set in context, without requiring any fine-tuning or a reference model. The authors prove that, under a local linearization, ICA recovers the first-order update direction toward the holdout optimum, enabling data value estimation and dynamic reweighting with zero training overhead. ICA is broadly compatible with diverse alignment paradigms, including SFT, DPO, and SimPO. Extensive experiments across multiple models and datasets demonstrate consistent improvements in alignment performance with negligible computational overhead. The main limitation is rapidly drifting on-policy updates, under which the ICA scores can become stale between refreshes.
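The local-linearization claim in the summary can be written out as a short sketch (notation assumed here: $\theta$ the current parameters, $\eta$ a learning rate, $\ell(\theta; x)$ the training loss on a candidate example $x$, and $\mathcal{L}_{\mathrm{ho}}$ the holdout loss):

$$
\mathcal{L}_{\mathrm{ho}}\bigl(\theta - \eta\,\nabla_\theta \ell(\theta; x)\bigr) \;\approx\; \mathcal{L}_{\mathrm{ho}}(\theta) \;-\; \eta\,\nabla_\theta \mathcal{L}_{\mathrm{ho}}(\theta)^{\top}\nabla_\theta \ell(\theta; x)
$$

A candidate is thus valuable when its gradient aligns with the holdout gradient; the point of ICA is to estimate the left-hand side directly in context, rather than computing per-example gradients or retraining.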

πŸ“ Abstract
Fine-tuning large pretrained language models is a common approach for aligning them with human preferences, but noisy or off-target examples can dilute supervision. While small, well-chosen datasets often match the performance of much larger ones, systematic and efficient ways to identify high-value training data remain underexplored. Many current methods rely on heuristics or expensive retraining. We present a theoretically grounded, resource-efficient framework for data selection and reweighting. At its core is an In-Context Approximation (ICA) that estimates the holdout loss a model would incur after training on a candidate example by conditioning on a small, curated holdout set in context. ICA requires no reference model and no additional finetuning. Under a local linearization, ICA is equivalent to a first-order update toward the holdout optimum, motivating its use as a proxy for data value. We derive per-example weights from ICA scores, dynamically reweighting gradient updates as model parameters evolve. Across SFT, DPO, and SimPO, and over diverse backbones and datasets, ICA-based reweighting consistently improves model alignment with minimal overhead. We analyze sensitivity to score update frequency and the choice of $k$ holdout examples for in-context demonstrations, and note limitations for rapidly drifting on-policy updates, highlighting directions for future work. Code and prompts will be released.
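The scoring-and-reweighting loop the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: `cond_loss` stands in for the frozen model's loss on the holdout targets given the candidate plus the $k$ holdout demonstrations in context, and the softmax over negative scores is one assumed choice for turning ICA scores into per-example weights.

```python
import math

def ica_scores(candidates, holdout, cond_loss):
    """Estimate, for each candidate, the holdout loss the model would incur
    after training on it. ICA approximates this in context: condition the
    frozen model on the candidate plus the holdout demonstrations and
    measure the loss on the holdout targets (no finetuning is performed)."""
    return [cond_loss([c] + holdout, holdout) for c in candidates]

def ica_weights(scores, temperature=1.0):
    """Lower approximated holdout loss -> higher data value -> larger weight.
    A softmax over negative scores is one simple normalization choice."""
    exps = [math.exp(-s / temperature) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def toy_cond_loss(context, targets):
    """Hypothetical stand-in for a real LM's conditional loss: here a
    candidate 'helps' the holdout set when it shares tokens with it."""
    cand_tokens = set(context[0].split())
    overlap = sum(len(cand_tokens & set(t.split())) for t in targets)
    return 1.0 / (1.0 + overlap)

holdout = ["translate cat to chat", "translate dog to chien"]
candidates = ["translate bird to oiseau", "stock prices rose today"]
scores = ica_scores(candidates, holdout, toy_cond_loss)
weights = ica_weights(scores)  # on-topic candidate gets the larger weight
```

In training, these weights would scale each example's gradient update, and the scores would be refreshed periodically as the parameters evolve, which is the score-update-frequency sensitivity the abstract analyzes.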
Problem

Research questions and friction points this paper is trying to address.

Selecting high-value training data efficiently for LLM fine-tuning
Estimating holdout loss without retraining using in-context learning
Dynamically reweighting examples to improve model alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

In-context approximation estimates holdout loss without retraining
Dynamic gradient reweighting based on per-example ICA scores
Framework improves alignment across SFT, DPO, and SimPO
πŸ”Ž Similar Papers
No similar papers found.
Ling Zhang
Alibaba DAMO Academy USA
Medical Image Analysis, Medical Image Computing, Machine Learning, Image Processing
Xianliang Yang
Microsoft Research Asia, Beijing, China
Juwon Yu
KT, Seoul, Korea
Park Cheonyoung
KT, Seoul, Korea
Lei Song
Microsoft Research Asia, Beijing, China
Jiang Bian
Microsoft Research Asia, Beijing, China