AI Summary
To address the dilution of supervisory signals by noisy data in large language model supervised fine-tuning (SFT), this paper proposes an efficient data selection framework based on holdout loss. The core innovation is the *In-Context Approximation* (ICA) method, which linearly approximates the gradient influence of individual samples within the context of a small holdout set, without requiring model fine-tuning or reference models. We theoretically prove that ICA recovers the first-order update direction, enabling zero-training-overhead data value estimation and dynamic reweighting. ICA is broadly compatible with diverse alignment paradigms, including SFT, DPO, and SimPO. Extensive experiments across multiple models and datasets demonstrate consistent improvements in alignment performance with negligible computational overhead. Moreover, ICA remains robust relative to mainstream data selection strategies, degrading only slightly under rapidly drifting on-policy updates.
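The first-order claim above can be made concrete with a standard Taylor sketch (the notation here is illustrative, not necessarily the paper's). Taking one SGD step on a candidate $x$ from parameters $\theta$, $\theta' = \theta - \eta \nabla_\theta \ell(x;\theta)$, and linearizing the holdout loss $L_{\mathrm{hold}}$ around $\theta$:

```latex
L_{\mathrm{hold}}(\theta') \;\approx\; L_{\mathrm{hold}}(\theta)
  \;-\; \eta\, \nabla_\theta \ell(x;\theta)^{\top} \nabla_\theta L_{\mathrm{hold}}(\theta)
```

The predicted drop in holdout loss is proportional to the alignment between the candidate's gradient and the holdout gradient; ICA's in-context estimate serves as a training-free proxy for the left-hand side.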
Abstract
Fine-tuning large pretrained language models is a common approach for aligning them with human preferences, but noisy or off-target examples can dilute supervision. While small, well-chosen datasets often match the performance of much larger ones, systematic and efficient ways to identify high-value training data remain underexplored. Many current methods rely on heuristics or expensive retraining. We present a theoretically grounded, resource-efficient framework for data selection and reweighting. At its core is an In-Context Approximation (ICA) that estimates the holdout loss a model would incur after training on a candidate example by conditioning on a small, curated holdout set in context. ICA requires no reference model and no additional fine-tuning. Under a local linearization, ICA is equivalent to a first-order update toward the holdout optimum, motivating its use as a proxy for data value. We derive per-example weights from ICA scores, dynamically reweighting gradient updates as model parameters evolve. Across SFT, DPO, and SimPO, and over diverse backbones and datasets, ICA-based reweighting consistently improves model alignment with minimal overhead. We analyze sensitivity to score update frequency and the choice of $k$ holdout examples for in-context demonstrations, and note limitations for rapidly drifting on-policy updates, highlighting directions for future work. Code and prompts will be released.
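The scoring-and-reweighting loop described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's released code: we assume a `cond_loss(candidate, context)` oracle that returns the model's loss on a candidate when the $k$ holdout demonstrations are placed in context, and we map scores to per-example weights with a softmax (the actual weighting scheme may differ).

```python
# Hypothetical sketch of ICA-style scoring and reweighting.
# `cond_loss` stands in for a forward pass that evaluates a candidate
# conditioned on holdout demonstrations prepended in-context.
import math


def ica_scores(candidates, holdout, cond_loss):
    """Score each candidate by its loss conditioned on the holdout context.

    Lower conditioned loss suggests the candidate is better aligned with
    the holdout distribution, so it should receive a larger weight.
    """
    return [cond_loss(x, holdout) for x in candidates]


def reweight(scores, tau=1.0):
    """Map scores to per-example weights via a softmax over negated losses."""
    logits = [-s / tau for s in scores]
    m = max(logits)                               # subtract max for stability
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]


# Toy stand-in for the conditioned-loss oracle: distance to the holdout mean.
holdout = [0.0, 1.0, 2.0]
mean_h = sum(holdout) / len(holdout)
toy_cond_loss = lambda x, ctx: abs(x - mean_h)

candidates = [1.0, 5.0, 1.5]
weights = reweight(ica_scores(candidates, holdout, toy_cond_loss))
# The candidate closest to the holdout mean gets the largest weight.
```

Because model parameters evolve during training, the scores (and hence the weights) would be refreshed periodically, which is the score-update-frequency knob analyzed in the paper.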