🤖 AI Summary
Dynamic data pruning struggles to efficiently estimate per-sample loss under complex models or loss functions, limiting its practical applicability. This work proposes the Batch Loss Score (BLS), which treats each batch-level loss as a noisy observation of the individual losses of the samples in that batch and applies an exponential moving average (EMA) to filter out the noise induced by varying batch compositions, thereby assigning each sample an importance score. Requiring only three lines of code for integration and a single line for proxy adaptation, BLS is theoretically grounded in a low-pass filtering perspective: the EMA attenuates high-frequency batch-composition noise so the score tracks each sample's persistent loss contribution. Evaluated across 14 datasets, 11 tasks, and 18 models, BLS achieves lossless pruning of 20%–50% of training samples, substantially improving training efficiency.
📝 Abstract
Dynamic data pruning accelerates deep learning by selectively omitting less informative samples during training. While per-sample loss is a common importance metric, obtaining it can be challenging or infeasible for complex models or loss functions, often requiring significant implementation effort. This work proposes the Batch Loss Score (BLS), a computationally efficient alternative that uses an Exponential Moving Average (EMA) of readily available batch losses to assign scores to individual samples. We frame the batch loss, from the perspective of a single sample, as a noisy measurement of its scaled individual loss, with noise originating from stochastic batch composition. We formally show that the EMA mechanism functions as a first-order low-pass filter, attenuating high-frequency batch-composition noise. The resulting score approximates the smoothed, persistent contribution of the individual sample to the loss, providing a theoretical grounding for BLS as a proxy for sample importance. BLS is remarkably simple to integrate (a **three-line injection**) and readily adapts existing per-sample loss-based methods (a **one-line proxy**). It enhances two such methods to losslessly prune **20%–50%** of samples across *14 datasets*, *11 tasks*, and *18 models*, highlighting its utility and broad applicability, especially in complex scenarios where per-sample loss is difficult to access. Code is available at https://github.com/mrazhou/BLS.
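The core mechanism described above (EMA-filtered batch losses as per-sample scores) can be sketched in a few lines. This is a minimal illustration of the idea, not the authors' implementation: the function name `update_bls`, the score table, and the decay `beta` are all hypothetical names chosen for this sketch.

```python
import numpy as np

def update_bls(scores, batch_indices, batch_loss, beta=0.9):
    """EMA update: treat the batch loss as a noisy observation of each
    member sample's individual loss and low-pass filter it per sample.
    (Illustrative sketch; names and decay value are assumptions.)"""
    scores[batch_indices] = beta * scores[batch_indices] + (1 - beta) * batch_loss
    return scores

# Toy usage: samples 0 and 1 repeatedly land in high-loss batches,
# samples 2 and 3 in low-loss batches; their scores separate accordingly.
scores = np.zeros(4)
for batch_loss, idx in [(2.0, [0, 1]), (0.5, [2, 3]), (2.2, [0, 1])]:
    scores = update_bls(scores, idx, batch_loss)
```

After these updates, `scores[0]` and `scores[1]` exceed `scores[2]` and `scores[3]`, so a pruning rule that drops low-score samples would keep the consistently high-loss ones, which matches the paper's use of BLS as an importance proxy.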