🤖 AI Summary
To address the challenge of data silos in collaborative settings—where raw data sharing is prohibited and cross-domain data quality assessment is infeasible—this paper proposes a training-dynamics-based data quality evaluation method. Specifically, it introduces the cumulative inner-product trace of per-sample gradients over training iterations as a novel, interpretable measure of sample influence, combined with a lightweight anchor dataset to enable high-quality sample selection across private domains without data exchange. The approach is compatible with both federated learning and model fusion paradigms, yielding a scalable evaluation framework that accommodates heterogeneous domains. Experiments on real-world private datasets from healthcare, finance, and multilingual domains demonstrate that the selected samples significantly improve large language model fine-tuning performance, outperforming state-of-the-art data selection baselines.
📝 Abstract
Recent research has highlighted the importance of data quality in scaling large language models (LLMs). However, automated data quality control faces unique challenges in collaborative settings, where direct data sharing between silos is not allowed. To tackle this issue, this paper proposes a novel data quality control technique based on the influence of data on the training dynamics of LLMs: high-quality data are more likely to exhibit training dynamics similar to those of an anchor dataset. We leverage this influence on training dynamics to select high-quality data from different private domains, with centralized model updates on the server side in a collaborative training fashion, via either model merging or federated learning. As the data quality indicator, we compute per-sample gradients with respect to both the private data and the anchor dataset, and use the trace of the accumulated inner products as a measure of data quality. In addition, we develop a quality control evaluation scheme tailored to collaborative settings with heterogeneous domain data. Experiments show that training on high-quality data selected by our method often outperforms other data selection methods for collaborative fine-tuning of LLMs, across diverse private domain datasets in medical, multilingual, and financial settings. Our code is released at github.com/Ryan0v0/CLUES.
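The quality indicator above can be sketched numerically. The sketch below is a minimal illustration, not the paper's implementation: it assumes gradients are flattened to vectors, in which case the trace of the accumulated outer products between a sample's gradient and the anchor gradient reduces to a running dot product over training steps. The function name, toy two-dimensional gradients, and step count are all hypothetical.

```python
import numpy as np

def quality_score(sample_grads, anchor_grads):
    """Accumulate, over training steps, the inner product between a
    sample's flattened gradient and the anchor-set gradient.

    For flattened gradients g_s, g_a, trace(g_s @ g_a.T) == g_s . g_a,
    so the trace of the accumulated outer products is just the sum of
    per-step dot products. Higher score = dynamics closer to the anchor.
    """
    return sum(float(np.dot(g_s, g_a))
               for g_s, g_a in zip(sample_grads, anchor_grads))

# Toy example: 2 training steps, 2-dim flattened gradients (hypothetical).
anchor  = [np.array([1.0, 2.0]), np.array([0.5, -1.0])]
aligned = [0.9 * g for g in anchor]   # sample whose dynamics track the anchor
opposed = [-1.0 * g for g in anchor]  # sample pushing in the opposite direction

print(quality_score(aligned, anchor))  # positive: keep this sample
print(quality_score(opposed, anchor))  # negative: filter this sample out
```

A sample whose per-step gradients consistently align with the anchor set accumulates a large positive score and is retained; conflicting gradients drive the score negative, flagging the sample for removal.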