🤖 AI Summary
To address the communication bottleneck of model parallelism in multi-GPU LLM inference, this paper proposes Ladder Residual: a simple modification of the residual stream that decouples communication from computation so the two can overlap. The key insight is that communication latency can be hidden through architectural redesign, not just systems optimization, and this requires no modifications to underlying distributed frameworks. Applied to a Transformer under Tensor Parallelism, Ladder Residual yields a 30% end-to-end inference speedup for a 70B model sharded across eight GPUs; 1B and 3B models trained from scratch match standard Transformers in accuracy; and a partially converted Llama-3.1 8B model retains near-lossless accuracy after fine-tuning on only 3B tokens.
📝 Abstract
Large language model inference is both memory-intensive and time-consuming, often requiring distributed algorithms to scale efficiently. Various model parallelism strategies are used in multi-GPU training and inference to partition computation across multiple devices, reducing memory load and computation time. However, model parallelism necessitates communication between GPUs, which has been a major bottleneck and limits the gains from scaling up the number of devices. We introduce Ladder Residual, a simple architectural modification applicable to all residual-based models that enables straightforward overlapping, effectively hiding the latency of communication. Our insight is that in addition to systems optimization, one can also redesign the model architecture to decouple communication from computation. While Ladder Residual allows communication-computation decoupling under conventional parallelism patterns, this paper focuses on Tensor Parallelism, which is particularly bottlenecked by its heavy communication. For a Transformer model with 70B parameters, applying Ladder Residual to all of its layers achieves a 30% end-to-end wall-clock speedup at inference time with TP sharding over 8 devices. We refer to the resulting model as the Ladder Transformer. We train 1B and 3B Ladder Transformers from scratch and observe performance comparable to a standard dense Transformer baseline. We also show that parts of the Llama-3.1 8B model can be converted to our Ladder Residual architecture with minimal accuracy degradation by retraining on only 3B tokens.
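The core dataflow change can be sketched abstractly. In a standard residual block under Tensor Parallelism, each block's sharded output must be all-reduced before the next block can start, serializing communication and computation. Ladder Residual instead lets block *i* read the residual stream before block *i−1*'s output has been merged in, so block *i−1*'s all-reduce can run concurrently with block *i*'s computation; the reduced output is added to the stream one step later. The sketch below is a minimal, single-process illustration of this reordering (the helper names and the toy `all_reduce` stand-ins are illustrative, not the paper's implementation):

```python
def standard_forward(x, blocks, all_reduce):
    # Standard residual stream: x_{i+1} = x_i + AllReduce(f_i(x_i)).
    # Each all_reduce must finish before the next block can begin.
    for f in blocks:
        x = x + all_reduce(f(x))
    return x


def ladder_forward(x, blocks, all_reduce_async):
    # Ladder residual stream: block i computes on the residual *without*
    # block i-1's (still in-flight) output, so the previous all-reduce
    # overlaps with the current block's computation.
    pending = None  # handle for the in-flight all-reduce, if any
    for f in blocks:
        partial = f(x)                       # compute on the "stale" residual
        if pending is not None:
            x = x + pending.wait()           # merge previous block's reduced output
        pending = all_reduce_async(partial)  # overlaps with the next block's compute
    if pending is not None:
        x = x + pending.wait()               # drain the last in-flight reduce
    return x


class _Handle:
    """Toy stand-in for an async communication handle (e.g. the object
    returned by torch.distributed.all_reduce with async_op=True)."""

    def __init__(self, value):
        self.value = value

    def wait(self):
        return self.value


def fake_all_reduce_async(t):
    # Single-process stand-in: the "reduction" is the identity.
    return _Handle(t)
```

Note that the ladder variant is not numerically identical to the standard one (each block sees a residual stream that lags by one block's contribution), which is why the paper trains or briefly fine-tunes models under the new architecture rather than converting them for free.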