Accelerated Training through Iterative Gradient Propagation Along the Residual Path

📅 2025-01-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
The inherent sequentiality of backpropagation in deep models severely limits training efficiency, particularly in residual architectures such as ResNets, Transformers, and RNNs. To address this, the paper proposes Highway-BP, an algorithm that decomposes the gradient along residual connections into a multi-path additive form, enabling approximate backpropagation via iterative gradient accumulation and parallel per-layer backward passes. Highway-BP requires no modification to the forward pass and is natively compatible with standard residual models. A theoretical analysis supports gradient consistency, and an approximation strategy is introduced to trade off accuracy against efficiency. Experiments on image classification and machine translation show that Highway-BP achieves a 1.8–2.4× training-time speedup while preserving model quality (accuracy degradation below 0.3%), alleviating the backpropagation bottleneck in deep networks.

📝 Abstract
Despite being the cornerstone of deep learning, backpropagation is criticized for its inherent sequentiality, which can limit the scalability of very deep models. Such models once faced convergence issues due to vanishing gradients, later resolved by residual connections, variants of which are now widely used in modern architectures. However, the computational cost of backpropagation remains a major burden, accounting for most of the training time. Taking advantage of residual-like architectural designs, we introduce Highway backpropagation, a parallelizable iterative algorithm that approximates backpropagation by alternately i) accumulating the gradient estimates along the residual path, and ii) backpropagating them through every layer in parallel. This algorithm is naturally derived from a decomposition of the gradient as the sum of gradients flowing through all paths and is adaptable to a diverse set of common architectures, ranging from ResNets and Transformers to recurrent neural networks. Through an extensive empirical study on a large selection of tasks and models, we evaluate Highway-BP and show that major speedups can be achieved with minimal performance degradation.
Problem

Research questions and friction points this paper is trying to address.

Backpropagation Efficiency
Deep Learning Models
Residual Connections
Innovation

Methods, ideas, or system contributions that make the work stand out.

Highway Backpropagation
Parallel Gradient Estimation
Deep Learning Acceleration