DropBP: Accelerating Fine-Tuning of Large Language Models by Dropping Backward Propagation

๐Ÿ“… 2024-02-27
๐Ÿ›๏ธ arXiv.org
๐Ÿ“ˆ Citations: 1
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
To address the computational overhead and activation-memory bottlenecks that backward propagation imposes on large language model (LLM) fine-tuning, this paper proposes DropBP (Dropping Backward Propagation), which randomly drops layers during the backward pass and assigns each layer a drop rate based on its sensitivity. Skipping layers in the backward pass is essentially equivalent to training shallow submodules formed by the undropped layers and residual connections. DropBP is orthogonal to, and can be combined with, parameter-efficient fine-tuning (PEFT) techniques. Experiments show that, with accuracy comparable to the baseline, DropBP reduces training time by 44%, accelerates convergence to the same perplexity by 1.5x, increases the maximum trainable sequence length on a single NVIDIA A100 GPU by 6.2x, and improves throughput by 79% on an A100 GPU and 117% on an Intel Gaudi2 HPU.
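
The idea of skipping the backward pass of a residual block can be illustrated with a minimal PyTorch sketch (not the paper's actual implementation; the `DropBPBlock` wrapper and its fields are hypothetical names). When a block is "dropped," its output is computed without building an autograd graph, so gradients flow only through the residual (identity) path and the block's intermediate activations need not be stored:

```python
import torch
import torch.nn as nn

class DropBPBlock(nn.Module):
    """Hypothetical wrapper around a residual block y = x + f(x).
    The forward output is unchanged, but with probability `drop_rate`
    the backward pass through f is skipped."""

    def __init__(self, block: nn.Module, drop_rate: float = 0.0):
        super().__init__()
        self.block = block          # e.g. one transformer layer f
        self.drop_rate = drop_rate  # per-layer drop probability

    def forward(self, x):
        if self.training and torch.rand(()) < self.drop_rate:
            # Compute f(x) without recording autograd history: no backward
            # computation and no activation storage for this layer; the
            # gradient w.r.t. x still passes through the residual addition.
            with torch.no_grad():
                out = self.block(x)
            return x + out
        return x + self.block(x)
```

In effect, each training step backpropagates through a randomly chosen shallow submodule of the full network, which is where the compute and activation-memory savings come from.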

๐Ÿ“ Abstract
Large language models (LLMs) have achieved significant success across various domains. However, training these LLMs typically involves substantial memory and computational costs during both forward and backward propagation. While parameter-efficient fine-tuning (PEFT) considerably reduces the training memory associated with parameters, it does not address the significant computational costs and activation memory. In this paper, we propose Dropping Backward Propagation (DropBP), a novel approach designed to reduce computational costs and activation memory while maintaining accuracy. DropBP randomly drops layers during backward propagation, which is essentially equivalent to training shallow submodules generated by undropped layers and residual connections. Additionally, DropBP calculates the sensitivity of each layer to assign an appropriate drop rate, thereby stabilizing the training process. DropBP is not only applicable to full fine-tuning but can also be orthogonally integrated with all types of PEFT by dropping layers during backward propagation. Specifically, DropBP can reduce training time by 44% with comparable accuracy to the baseline, accelerate convergence to the same perplexity by 1.5x, and enable training with a sequence length 6.2x larger on a single NVIDIA-A100 GPU. Furthermore, our DropBP enabled a throughput increase of 79% on a NVIDIA A100 GPU and 117% on an Intel Gaudi2 HPU. The code is available at https://github.com/WooSunghyeon/dropbp.
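
The abstract also mentions assigning each layer an appropriate drop rate based on its sensitivity. Below is a small, hedged sketch of one way such an allocation could look: less-sensitive layers receive higher drop rates while the average drop rate is held at a target value. The sensitivity scores and the inverse-proportional rule here are illustrative assumptions, not the paper's exact formulation:

```python
import torch

def assign_drop_rates(sensitivities, target_mean, max_rate=1.0):
    """Hypothetical allocation: less-sensitive layers get higher drop rates,
    rescaled so the mean drop rate equals `target_mean`.
    `sensitivities` is a per-layer score (e.g. a gradient-based estimate);
    the sensitivity metric used in the paper may differ."""
    s = torch.as_tensor(sensitivities, dtype=torch.float32)
    inv = 1.0 / (s + 1e-8)                   # less sensitive -> larger weight
    rates = inv / inv.mean() * target_mean   # rescale mean to the target
    # Clamping can pull the realized mean slightly below the target.
    return rates.clamp(0.0, max_rate)

# Example: four layers, the first being the most sensitive.
print(assign_drop_rates([4.0, 2.0, 1.0, 1.0], target_mean=0.5))
# tensor([0.1818, 0.3636, 0.7273, 0.7273])
```

A per-layer schedule like this lets the overall fraction of skipped backward computation hit a desired budget while concentrating the skipping on layers whose gradients matter least, which is the stabilizing role sensitivity plays in the paper.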
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
High Computational Demand
Memory Requirements
Innovation

Methods, ideas, or system contributions that make the work stand out.

DropBP
Layer Dropping
Training Acceleration