Online Importance Sampling for Stochastic Gradient Optimization

📅 2023-11-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the low gradient estimation accuracy in stochastic gradient descent (SGD) and the high computational overhead and integration difficulty of conventional importance sampling, this paper proposes a lightweight online dynamic importance sampling method. Our approach introduces, for the first time, an importance metric based on the derivative of the loss function with respect to the network output, enabling real-time, mini-batch-level sample importance assessment without preprocessing or auxiliary models. By integrating gradient sensitivity analysis with online data pruning, the method supports dynamic mini-batch construction. Extensive experiments across diverse models and datasets demonstrate that our method significantly reduces training time on both classification and regression tasks, with negligible accuracy degradation. Moreover, it exhibits strong generalization capability and plug-and-play compatibility—requiring no architectural modifications or additional hyperparameters.
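The core idea above — scoring each sample by the derivative of the loss with respect to the network output, then building the mini-batch from those scores — can be sketched with plain NumPy. This is an illustrative reconstruction, not the paper's code: the toy logits, class count, and sub-batch size are assumptions, and for softmax cross-entropy the output-space gradient has the closed form `softmax(logits) - one_hot(labels)`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumed for illustration): 8 samples, 3 classes,
# with logits standing in for a network's forward-pass output.
logits = rng.normal(size=(8, 3))
labels = rng.integers(0, 3, size=8)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# For cross-entropy, dL/d(output) = softmax(logits) - one_hot(labels),
# so the output-space gradient is available almost for free.
probs = softmax(logits)
one_hot = np.eye(3)[labels]
dL_dout = probs - one_hot

# Per-sample importance: L2 norm of the output-space gradient.
importance = np.linalg.norm(dL_dout, axis=1)

# Normalize into a sampling distribution over the candidate samples.
p = importance / importance.sum()

# Draw an importance-weighted sub-batch (with replacement here;
# the paper's exact sampling scheme may differ).
idx = rng.choice(len(p), size=4, p=p)
```

Because the metric only needs the loss derivative at the output layer, it can be evaluated during the forward/backward pass of every mini-batch, which is what makes the method "online" and preprocessing-free.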
📝 Abstract
Machine learning optimization often depends on stochastic gradient descent, where the precision of gradient estimation is vital for model performance. Gradients are calculated from mini-batches formed by uniformly selecting data samples from the training dataset. However, not all data samples contribute equally to gradient estimation. To address this, various importance sampling strategies have been developed to prioritize more significant samples. Despite these advancements, all current importance sampling methods encounter challenges related to computational efficiency and seamless integration into practical machine learning pipelines. In this work, we propose a practical algorithm that efficiently computes data importance on-the-fly during training, eliminating the need for dataset preprocessing. We also introduce a novel metric based on the derivative of the loss w.r.t. the network output, designed for mini-batch importance sampling. Our metric prioritizes influential data points, thereby enhancing gradient estimation accuracy. We demonstrate the effectiveness of our approach across various applications. We first perform classification and regression tasks to demonstrate improvements in accuracy. Then, we show how our approach can also be used for online data pruning by identifying and discarding data samples that contribute minimally towards the training loss. This significantly reduces training time with negligible loss in model accuracy.
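The online data-pruning application described at the end of the abstract can be sketched as follows. This is a hedged illustration, not the paper's algorithm: the threshold rule (`tau` times the batch mean) and the inverse-probability reweighting of the surviving samples are standard importance-sampling devices chosen here for concreteness, and the placeholder scores stand in for the per-sample loss-derivative norms.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical importance scores for one mini-batch of 16 samples,
# e.g. L2 norms of dL/d(output); these values are placeholders.
importance = rng.uniform(0.0, 1.0, size=16)

# Online pruning: discard samples whose importance falls below a
# fraction of the batch mean (tau is an illustrative choice, not a
# value from the paper).
tau = 0.5
keep = importance > tau * importance.mean()

# Inverse-probability-style weights for the survivors, so that
# high-importance samples are not over-counted in the gradient
# estimate (the paper's exact correction may differ).
weights = importance.sum() / (len(importance) * importance[keep])
```

Since pruning decisions are made per mini-batch from quantities already computed during training, no preprocessing pass over the dataset and no auxiliary model are required.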
Problem

Research questions and friction points this paper is trying to address.

Stochastic Gradient Descent
Gradient Accuracy Optimization
Important Sample Identification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online Importance Sampling
Gradient Accuracy Enhancement
Efficient Training Time
Corentin Salaun
Max Planck Institute for Informatics, Saarland University, Saarbrücken, Germany
Xingchang Huang
Max Planck Institute for Informatics, Saarland University, Saarbrücken, Germany
Iliyan Georgiev
Adobe Research
Computer Graphics · Global Illumination · Ray Tracing · Monte Carlo · Stochastic Sampling
N. Mitra
Adobe Research, London, United Kingdom; Department of Computer Science, University College London, London, United Kingdom
Gurprit Singh
Advanced Micro Devices (AMD)
Generative Models · MCMC · Generative Rendering