🤖 AI Summary
Existing LLM alignment methods rely on human preference data, yet preference quality is model-dependent—identical preference pairs may benefit one model while harming another. Current approaches, which employ external reward models or general-purpose LLMs to filter preference data, lack fine-grained assessment of individual data-point impact on target-model training dynamics.
Method: We propose a model-dependent individual data influence evaluation framework. It introduces the Truncated Influence Function (TIF) to quantify how each preference pair affects the target model’s training trajectory, and derives two TIF-correlated, efficiently computable proxy scoring functions. We further incorporate an error-complementary fusion mechanism to enhance selection robustness.
Contribution/Results: Experiments across multiple LLM families and mainstream alignment benchmarks demonstrate that our method achieves superior alignment performance using fewer preference samples, significantly outperforming baselines. Results validate both its effectiveness and strong cross-model generalization capability.
📝 Abstract
Large language model (LLM) alignment is typically achieved through learning from human preference comparisons, making the quality of preference data critical to its success. Existing studies often pre-process raw training datasets to identify valuable preference pairs using external reward models or off-the-shelf LLMs, achieving improved overall performance but rarely examining whether each selected data point is genuinely beneficial. We assess data quality through individual influence on validation data using our newly proposed truncated influence function (TIF), which mitigates the over-scoring present in traditional measures and reveals that preference data quality is inherently a property of the model: a data pair that benefits one model may harm another. This calls for preference data selection approaches that adapt to the specific target model. To this end, we introduce two candidate scoring functions (SFs) that are computationally simpler than TIF and positively correlated with it. They are also model-dependent and can serve as practical indicators of individual data quality for preference data selection. We further observe that these SFs inherently exhibit errors relative to TIF, and we therefore combine them to offset their distinct error sources, yielding a simple yet effective data selection rule that allows models to select valuable preference data more precisely. We conduct experiments across diverse alignment benchmarks and various LLM families, with results demonstrating that better alignment performance can be achieved using less data, showing the generality of our findings and new methods.
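The selection pipeline described above can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's exact formulation: it assumes a first-order gradient-dot-product influence approximation, and the function names, the truncation threshold `tau`, and the equal-weight fusion are all hypothetical choices made here for illustration.

```python
import numpy as np

def truncated_influence(train_grads, val_grad, tau=1.0):
    """Illustrative TIF-style score: dot product of each training pair's
    gradient with the validation gradient (a first-order influence proxy),
    truncated at +/- tau to curb over-scoring by outlier samples."""
    raw = train_grads @ val_grad          # shape: (n_train,)
    return np.clip(raw, -tau, tau)

def fuse_and_select(scores_a, scores_b, k):
    """Illustrative error-complementary fusion: average two proxy scores
    whose errors stem from different sources, then keep the top-k pairs."""
    fused = 0.5 * (scores_a + scores_b)
    return np.argsort(fused)[::-1][:k]

# Toy example: 3 preference pairs, 2-dimensional gradients.
g = np.array([[2.0, 0.0], [0.0, 1.0], [-3.0, 0.0]])
v = np.array([1.0, 1.0])
scores = truncated_influence(g, v, tau=1.5)   # [1.5, 1.0, -1.5]
selected = fuse_and_select(scores, scores, k=2)
```

In practice the per-sample gradients would come from the target model itself, which is what makes the resulting scores model-dependent: the same preference pair can receive a positive score under one model and a negative score under another.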