AI Summary
Backpropagation relies on two-pass computation and storage of intermediate activations, severely limiting training efficiency for large-scale models; while forward-mode automatic differentiation avoids these bottlenecks, existing gradient estimation methods suffer from high variance or substantial bias, hindering scalability. This paper proposes a backpropagation-free gradient estimation framework: it constructs low-bias guess directions by controllably modulating upstream Jacobian matrices, guided by a bias-variance trade-off analysis, to approximate gradient directions efficiently in the forward pass. Theoretically, the paper establishes that the intrinsic low-dimensional structure of neural network gradients critically governs estimation quality. Empirically, the method improves as network width increases, demonstrating favorable scalability. It points toward lightweight training of very large models, offering a new direction for scalable deep learning.
Abstract
While backpropagation (reverse-mode automatic differentiation) has been extraordinarily successful in deep learning, it requires two passes (forward and backward) through the neural network and the storage of intermediate activations. Existing gradient estimation methods that instead use forward-mode automatic differentiation struggle to scale beyond small networks due to the high variance of the estimates. Efforts to mitigate this have so far introduced significant bias into the estimates, reducing their utility. We introduce a gradient estimation approach that reduces both bias and variance by manipulating upstream Jacobian matrices when computing guess directions. It shows promising results and has the potential to scale to larger networks, indeed performing better as the network width is increased. Our understanding of this method is facilitated by analyses of bias and variance, and their connection to the low-dimensional structure of neural network gradients.
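For context, the baseline these methods build on is the standard forward-gradient estimator: sample a random guess direction, compute a single directional derivative in the forward pass, and scale the direction by it. The sketch below illustrates only this baseline (not the paper's Jacobian-manipulation method, whose details are not given in the abstract); the function names and the finite-difference JVP are illustrative assumptions.

```python
import numpy as np

def forward_gradient(f, x, rng, eps=1e-6):
    """One forward-mode gradient estimate: (J v) * v with v ~ N(0, I).

    Unbiased because E[v v^T] = I, so E[(grad . v) v] = grad, but each
    single estimate has high variance, which is the scaling bottleneck
    the abstract describes. The JVP is approximated here by a finite
    difference; true forward-mode AD would compute it exactly.
    """
    v = rng.standard_normal(x.shape)          # random guess direction
    jvp = (f(x + eps * v) - f(x)) / eps       # directional derivative grad(x) . v
    return jvp * v

# Example: f(x) = ||x||^2 has gradient 2x. A single estimate is noisy,
# but averaging many estimates recovers the true gradient.
rng = np.random.default_rng(0)
x = np.array([1.0, -2.0, 3.0])
est = np.mean(
    [forward_gradient(lambda z: np.sum(z ** 2), x, rng) for _ in range(20000)],
    axis=0,
)
# est approaches 2 * x = [2, -4, 6]
```

The variance of each estimate grows with dimension, which is why naive forward gradients degrade for wide networks; the paper's contribution is shaping the guess directions to curb that variance without incurring large bias.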