DiveBatch: Accelerating Model Training Through Gradient-Diversity Aware Batch Size Adaptation

📅 2025-09-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the trade-off between computational efficiency and generalization performance that fixed batch sizes impose on large-scale deep neural network training, this paper proposes DiveBatch, a gradient-diversity-aware adaptive batch-size SGD algorithm. Its core innovation is the first use of gradient diversity as a dynamic feedback signal for adapting the batch size during training: large batches accelerate convergence in early stages, while smaller batches enhance generalization in later stages. DiveBatch integrates seamlessly into the standard SGD framework without requiring additional hyperparameter tuning. Experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet demonstrate that DiveBatch converges 1.06–5.0× faster than standard SGD and AdaBatch, with only marginal accuracy degradation (<0.5%). This significantly improves the Pareto frontier between computational efficiency and convergence performance.

📝 Abstract
The goal of this paper is to accelerate the training of machine learning models, a critical challenge since training large-scale deep neural models can be computationally expensive. Stochastic gradient descent (SGD) and its variants are widely used to train deep neural networks. In contrast to traditional approaches that focus on tuning the learning rate, we propose a novel adaptive batch size SGD algorithm, DiveBatch, that dynamically adjusts the batch size. Adapting the batch size is challenging: using large batch sizes is more efficient due to parallel computation, but small-batch training often converges in fewer epochs and generalizes better. To address this challenge, we introduce a data-driven adaptation based on gradient diversity, enabling DiveBatch to maintain the generalization performance of small-batch training while improving convergence speed and computational efficiency. Gradient diversity has a strong theoretical justification: it emerges from the convergence analysis of SGD. Evaluations of DiveBatch on synthetic data and CIFAR-10, CIFAR-100, and Tiny-ImageNet demonstrate that DiveBatch converges significantly faster than standard SGD and AdaBatch (1.06–5.0×), with a slight trade-off in performance.
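Gradient diversity, the quantity the abstract grounds its adaptation in, is usually defined (following the SGD convergence-analysis literature, e.g. Yin et al., 2018) as the ratio of the sum of squared per-sample gradient norms to the squared norm of their sum. A minimal NumPy sketch, where the function name and array layout are illustrative rather than taken from the paper:

```python
import numpy as np

def gradient_diversity(per_sample_grads):
    """Gradient diversity: sum of squared per-sample gradient norms
    divided by the squared norm of the summed gradient. It equals 1/n
    when all n gradients are identical and 1 when they are mutually
    orthogonal, so higher values indicate more diverse gradients."""
    g = np.asarray(per_sample_grads, dtype=float)  # shape: (n_samples, n_params)
    sum_of_sq_norms = np.sum(np.linalg.norm(g, axis=1) ** 2)
    sq_norm_of_sum = np.linalg.norm(g.sum(axis=0)) ** 2
    return sum_of_sq_norms / sq_norm_of_sum

# Two identical gradients -> 0.5; two orthogonal unit gradients -> 1.0
print(gradient_diversity([[1.0, 0.0], [1.0, 0.0]]))  # → 0.5
print(gradient_diversity([[1.0, 0.0], [0.0, 1.0]]))  # → 1.0
```

Intuitively, when gradients are diverse (high ratio), averaging a large mini-batch loses little information, which is what makes the measure a natural signal for when larger batches are safe.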
Problem

Research questions and friction points this paper is trying to address.

Training large-scale deep neural models is computationally expensive
Fixed batch sizes force a trade-off: large batches are compute-efficient, but small batches converge in fewer epochs and generalize better
How can the batch size be adapted dynamically during training without sacrificing generalization?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamically adjusts batch size using gradient diversity
Maintains generalization while improving convergence speed
Converges 1.06–5.0× faster than standard SGD and AdaBatch
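The page does not spell out DiveBatch's exact update rule, but the theory behind gradient diversity (batch sizes up to roughly n times the diversity incur no convergence penalty) suggests a simple clamped rule. A hypothetical sketch of such an adaptation step, not the paper's actual formula:

```python
def adapt_batch_size(diversity, n_samples, min_bs=32, max_bs=4096):
    """Hypothetical batch-size adaptation rule (illustrative only, not
    DiveBatch's published formula): scale the target batch size with
    n_samples * gradient diversity, then clamp to a practical range.
    High diversity permits larger batches without slowing convergence;
    low diversity forces smaller batches to preserve generalization."""
    target = int(n_samples * diversity)
    return max(min_bs, min(max_bs, target))

# e.g. 50k training samples:
print(adapt_batch_size(0.01, 50_000))   # → 500  (moderate diversity)
print(adapt_batch_size(1e-6, 50_000))   # → 32   (clamped to min_bs)
print(adapt_batch_size(0.5, 50_000))    # → 4096 (clamped to max_bs)
```

The clamping keeps the schedule inside hardware-friendly bounds; everything else (thresholds, measurement frequency) would come from the paper itself.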
Yuen Chen
University of Illinois at Urbana-Champaign
Machine Learning · Causality · Trustworthy ML
Yian Wang
University of Illinois at Urbana-Champaign
Hari Sundaram
University of Illinois at Urbana-Champaign