🤖 AI Summary
In large-scale Transformer training with tensor parallelism (TP), frequent all-reduce communications between multi-head attention (MHA) and MLP modules per layer impose a severe efficiency bottleneck.
Method: We propose FAL, the first architecture that replaces conventional inter-module activation signals with the output of the initial attention layer, enabling a restructured data flow that eliminates all-reduce operations between MHA and MLP within each layer and allows their fully parallel execution. We further introduce FAL+, which incorporates normalized attention enhancement and output redirection to improve representational capacity without incurring additional communication overhead.
Results: Experiments show FAL achieves up to a 44% speedup in multi-GPU training and up to 1.18× higher single-GPU throughput than baseline GPT, while attaining lower perplexity. FAL+ further reduces perplexity, demonstrating that communication elimination and model quality improvement are jointly attainable.
📝 Abstract
As training billion-scale transformers becomes increasingly common, employing multiple distributed GPUs along with parallel training methods has become standard practice. However, existing transformer designs suffer from significant communication overhead, especially in Tensor Parallelism (TP), where each block's MHA-MLP connection requires an all-reduce communication. Through our investigation, we show that the MHA-MLP connections can be bypassed for efficiency, while the attention output of the first layer can serve as an alternative signal for the bypassed connection. Motivated by these observations, we propose FAL (First Attentions Last), an efficient transformer architecture that redirects the first MHA output to the MLP inputs of the following layers, eliminating the per-block MHA-MLP connections. This removes the all-reduce communication and enables parallel execution of MHA and MLP on a single GPU. We also introduce FAL+, which adds the normalized first attention output to the MHA outputs of the following layers to augment the MLP input and improve model quality. Our evaluation shows that FAL reduces multi-GPU training time by up to 44%, improves single-GPU throughput by up to 1.18×, and achieves better perplexity compared to the baseline GPT. FAL+ achieves even lower perplexity than the baseline without increasing training time.
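The restructured data flow described in the abstract can be sketched in a few lines. This is a minimal, illustrative NumPy toy, not the paper's implementation: layer norms, multi-head splitting, and the exact residual wiring are omitted, and all helper functions, shapes, and scaling constants are assumptions made for the sketch. The point it demonstrates is the dependency change: in layers after the first, the MLP reads the cached first-layer attention output rather than the current layer's MHA output, so the two sub-modules have no data dependency and could execute in parallel (and, under tensor parallelism, without a per-block MHA-to-MLP all-reduce).

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy hidden size

def attn(x, W):
    # Stand-in for a (single-head) self-attention module.
    scores = x @ W @ x.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x

def mlp(x, W1, W2):
    # Stand-in for the feed-forward (MLP) module.
    return np.maximum(x @ W1, 0) @ W2

# Per-layer parameters for a 3-layer toy model.
Ws  = [rng.normal(size=(d, d)) * 0.1 for _ in range(3)]
W1s = [rng.normal(size=(d, 4 * d)) * 0.1 for _ in range(3)]
W2s = [rng.normal(size=(4 * d, d)) * 0.1 for _ in range(3)]

x = rng.normal(size=(5, d))  # 5 tokens

# Layer 0: conventional block; cache its attention output.
first_attn = attn(x, Ws[0])
x = x + first_attn + mlp(first_attn, W1s[0], W2s[0])

# Following layers: the MLP input is redirected to `first_attn`,
# so attn_out and mlp_out are mutually independent and could be
# computed in parallel, with no MHA->MLP all-reduce in between.
for i in range(1, 3):
    attn_out = attn(x, Ws[i])                  # independent of mlp_out
    mlp_out = mlp(first_attn, W1s[i], W2s[i])  # independent of attn_out
    x = x + attn_out + mlp_out
```

In a conventional block the MLP consumes the same layer's MHA output, which under tensor parallelism forces an all-reduce between the two modules in every block; in the sketch above that serial dependency exists only in layer 0.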