Distributed Deep Learning using Stochastic Gradient Staleness

📅 2025-09-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Training deep neural networks (DNNs) is time-consuming, and synchronous backpropagation in distributed settings suffers from lock contention and synchronization bottlenecks. Method: This paper proposes a distributed training framework that integrates data parallelism with fully decoupled asynchronous backpropagation. It permits bounded gradient staleness—relaxing strict synchronization—and theoretically establishes convergence to critical points under non-convex objectives. Contribution/Results: The core innovation is a staleness-aware asynchronous update strategy that eliminates global synchronization barriers during backpropagation. Experiments on CIFAR-10 demonstrate substantial throughput improvement over synchronous baselines, while maintaining comparable model accuracy and convergence stability. The method thus achieves scalable, efficient, and robust distributed DNN training without compromising optimization fidelity.
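The staleness-aware update described in the summary can be illustrated with a small simulation. This is a minimal sketch, not the paper's implementation: it models asynchronous SGD on a toy quadratic objective, where each applied gradient may have been computed from a parameter snapshot up to `max_staleness` updates old. The function name and all parameters are hypothetical.

```python
import numpy as np

def async_sgd_bounded_staleness(grad_fn, w0, lr=0.1, max_staleness=3,
                                steps=200, seed=0):
    """Toy simulation of asynchronous SGD with bounded gradient staleness.

    Each step applies a gradient computed on a parameter snapshot that is
    between 0 and `max_staleness` updates old, mimicking workers that skip
    the global synchronization barrier. (Illustrative sketch only.)
    """
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float)
    history = [w.copy()]  # recent parameter snapshots workers may have read
    for _ in range(steps):
        # pick how stale this worker's snapshot is (bounded by max_staleness)
        staleness = int(rng.integers(0, min(max_staleness + 1, len(history))))
        snapshot = history[-1 - staleness]
        g = grad_fn(snapshot)       # gradient computed on stale parameters
        w = w - lr * g              # applied to the *current* parameters
        history.append(w.copy())
        history = history[-(max_staleness + 1):]  # keep only needed snapshots
    return w

# toy convex objective f(w) = ||w||^2 / 2, so grad f(w) = w
w_final = async_sgd_bounded_staleness(lambda w: w, w0=np.array([5.0, -3.0]))
```

Even with stale gradients, the iterates contract toward the minimizer for a small enough learning rate, which is the intuition behind the bounded-staleness convergence guarantee the paper establishes for non-convex objectives.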

📝 Abstract
Despite the notable success of deep neural networks (DNNs) in solving complex tasks, the training process still presents considerable challenges. A primary obstacle is the substantial time required for training, particularly as high-performing DNNs tend to become increasingly deep (characterized by a larger number of hidden layers) and require extensive training datasets. To address these challenges, this paper introduces a distributed training method that integrates two prominent strategies for accelerating deep learning: data parallelism and a fully decoupled parallel backpropagation algorithm. By utilizing multiple computational units operating in parallel, the proposed approach increases the amount of training data processed in each iteration while mitigating the locking issues commonly associated with the backpropagation algorithm. These features collectively contribute to significant improvements in training efficiency. The proposed distributed training method is rigorously proven to converge to critical points under certain conditions. Its effectiveness is further demonstrated through empirical evaluations, wherein a DNN is trained to perform classification tasks on the CIFAR-10 dataset.
Problem

Research questions and friction points this paper is trying to address.

Accelerating deep neural network training time
Mitigating locking issues in backpropagation algorithm
Improving distributed training efficiency with parallelism
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distributed training with data parallelism
Fully decoupled parallel backpropagation algorithm
Mitigates locking issues in backpropagation
Viet Hoang Pham
Information Technology Department, Posts and Telecommunications Institute of Technology (PTIT), Hanoi, Vietnam
Hyo-Sung Ahn
Professor, School of Mechanical Eng., GIST
Formation Control · Distributed Coordination · Networked Control Systems · Iterative Learning Control · Autonomous Systems