Partial Forward Blocking: A Novel Data Pruning Paradigm for Lossless Training Acceleration

📅 2025-06-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large-scale training is computationally expensive, and existing data pruning methods rely on gradients or surrogate models, which introduces additional overhead. To address this, we propose Partial Forward Blocking (PFB), a dynamic data pruning method that requires neither backpropagation nor surrogate models. PFB leverages shallow-layer features of the target model and introduces, for the first time, probability density estimation to quantify sample importance; it adaptively models feature distributions to prioritize rare yet information-rich samples. On ImageNet, PFB achieves lossless acceleration: removing 40% of the training data reduces training time by 33% while improving top-1 classification accuracy by 0.5%, outperforming state-of-the-art pruning techniques across multiple metrics. PFB thus establishes a scalable, lightweight, gradient-free paradigm for efficient large-model training, enabling substantial speedups without compromising model performance.

📝 Abstract
The ever-growing size of training datasets enhances the generalization capability of modern machine learning models but also incurs exorbitant computational costs. Existing data pruning approaches aim to accelerate training by removing less important samples. However, they often rely on gradients or proxy models, leading to prohibitive additional costs of gradient back-propagation and proxy model training. In this paper, we propose Partial Forward Blocking (PFB), a novel framework for lossless training acceleration. The efficiency of PFB stems from its unique adaptive pruning pipeline: sample importance is assessed based on features extracted from the shallow layers of the target model. Less important samples are then pruned, allowing only the retained ones to proceed with the subsequent forward pass and loss back-propagation. This mechanism significantly reduces the computational overhead of deep-layer forward passes and back-propagation for pruned samples, while also eliminating the need for auxiliary backward computations and proxy model training. Moreover, PFB introduces probability density as an indicator of sample importance. Combined with an adaptive distribution estimation module, our method dynamically prioritizes relatively rare samples, aligning with the constantly evolving training state. Extensive experiments demonstrate the significant superiority of PFB in performance and speed. On ImageNet, PFB achieves a 0.5% accuracy improvement and 33% training time reduction with 40% data pruned.
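The pipeline the abstract describes can be sketched roughly as follows. This is a minimal NumPy mock-up, not the paper's implementation: the `shallow`/`deep` split, the `density_estimator` interface, and the keep ratio are all hypothetical stand-ins for the components the paper leaves to its method section.

```python
import numpy as np

def pfb_training_step(batch, shallow, deep, density_estimator, keep_ratio=0.6):
    """One Partial-Forward-Blocking step (hypothetical interface):
    shallow forward for all samples, density-based importance scoring,
    then the deep forward only for the retained subset."""
    # 1) Cheap shallow forward pass for every sample in the batch.
    feats = shallow(batch)                      # (N, d) shallow-layer features
    # 2) Importance = rarity: low estimated density -> high importance.
    density = density_estimator(feats)          # (N,) estimated feature density
    importance = -density
    # 3) Keep the most important (rarest) samples; block the rest.
    n_keep = max(1, int(round(keep_ratio * len(batch))))
    keep_idx = np.argsort(importance)[-n_keep:]
    # 4) Only retained samples pay for the deep forward (and, during
    #    training, the loss back-propagation).
    logits = deep(feats[keep_idx])
    return keep_idx, logits

# Toy usage: identity "shallow layers", a random linear "deep network",
# and a distance-to-mean proxy in place of the adaptive density estimator.
rng = np.random.default_rng(0)
batch = rng.normal(size=(10, 4))
shallow = lambda x: x
deep = lambda f: f @ rng.normal(size=(4, 3))
density_estimator = lambda f: np.exp(-np.linalg.norm(f - f.mean(0), axis=1))
keep_idx, logits = pfb_training_step(batch, shallow, deep, density_estimator)
```

With a 0.6 keep ratio, 6 of the 10 samples reach the deep layers; the other 4 are blocked after the shallow pass, which is where the claimed savings on deep-layer forward and backward computation come from.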
Problem

Research questions and friction points this paper is trying to address.

Reduces computational costs in large dataset training
Eliminates need for proxy models and extra back-propagation
Dynamically prunes less important samples for efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive pruning using shallow layer features
Probability density as importance indicator
Eliminates proxy model and backward computations
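The density-as-importance idea in the bullets above can be illustrated with a plain Gaussian kernel density estimate. Note this is a generic KDE stand-in, not the paper's adaptive distribution-estimation module; the bandwidth and kernel choice here are assumptions for illustration.

```python
import numpy as np

def kde_importance(feats, bandwidth=1.0):
    """Importance score via kernel density estimation over shallow-layer
    features: rare samples get low density and therefore high importance."""
    # Pairwise squared distances between feature vectors.
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    # Gaussian-kernel density estimate at each sample.
    density = np.exp(-d2 / (2 * bandwidth ** 2)).mean(axis=1)
    return -density  # higher score = rarer = more important

# Three near-duplicate points and one outlier: the outlier is the
# rarest sample and should receive the highest importance score.
feats = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0]])
scores = kde_importance(feats)
```

Here the outlier at (5, 5) gets the top score, matching the paper's stated goal of prioritizing rare yet information-rich samples while pruning redundant ones.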
Dongyue Wu
National Key Laboratory of Multispectral Information Intelligent Processing Technology, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology
Zilin Guo
National Key Laboratory of Multispectral Information Intelligent Processing Technology, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology
Jialong Zuo
Zhejiang University
Speech Synthesis, Voice Conversion
Nong Sang
Huazhong University of Science and Technology
Computer Vision and Pattern Recognition
Changxin Gao
National Key Laboratory of Multispectral Information Intelligent Processing Technology, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology