Training Ultra Long Context Language Model with Fully Pipelined Distributed Transformer

📅 2024-08-30
🏛️ arXiv.org
📈 Citations: 2
Influential: 1
🤖 AI Summary
To address the high GPU memory consumption and hardware requirements of training large language models (LLMs) with ultra-long contexts, this paper proposes the Fully Pipelined Distributed Transformer (FPDT). Its core innovation is a sequence-chunk pipeline mechanism that extends the trainable sequence length by 16× without modifying the model architecture, while remaining compatible with existing training techniques, including pipeline parallelism, FP16/BF16 mixed-precision training, and computational-graph reordering. The authors train an 8B-parameter model on sequences of up to 2 million tokens using only four GPUs, sustaining a model FLOPs utilization (MFU) above 55% and significantly outperforming state-of-the-art approaches. The method substantially reduces the hardware and resource cost of long-context LLM training, establishing a general, scalable, and efficient paradigm for ultra-long-context training.

📝 Abstract
Large Language Models (LLMs) with long context capabilities are integral to complex tasks in natural language processing and computational biology, such as text generation and protein sequence analysis. However, training LLMs directly on extremely long contexts demands considerable GPU resources and increased memory, leading to higher costs and greater complexity. Alternative approaches that introduce long context capabilities via downstream finetuning or adaptations impose significant design limitations. In this paper, we propose Fully Pipelined Distributed Transformer (FPDT) for efficiently training long-context LLMs with extreme hardware efficiency. For GPT and Llama models, we achieve a 16x increase in sequence length that can be trained on the same hardware compared to current state-of-the-art solutions. With our dedicated sequence chunk pipeline design, we can now train 8B LLM with 2 million sequence length on only 4 GPUs, while also maintaining over 55% of MFU. Our proposed FPDT is agnostic to existing training techniques and is proven to work efficiently across different LLM models.
Problem

Research questions and friction points this paper is trying to address.

Training long-context LLMs requires excessive GPU resources and memory
Existing adaptation methods impose significant design limitations
Efficiently scaling sequence length for LLMs without hardware expansion
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fully Pipelined Distributed Transformer for efficiency
16x longer sequence length on same hardware
Trains 8B LLM with 2M length on 4 GPUs
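The sequence-chunk idea highlighted above can be sketched as a streaming attention loop that consumes key/value chunks one at a time with an online softmax, so peak activation memory scales with the chunk size rather than the full 2M-token context. This is a minimal single-process NumPy sketch for illustration only, not the authors' FPDT implementation (which additionally pipelines chunks across GPUs and overlaps compute with offloading); the function name and shapes are assumptions.

```python
import numpy as np

def chunked_attention(q, k, v, chunk_size):
    """Compute softmax(q @ k.T / sqrt(d)) @ v by streaming over
    key/value chunks with an online softmax, so memory for scores
    is O(len(q) * chunk_size) instead of O(len(q) * len(k))."""
    d = q.shape[-1]
    scale = 1.0 / np.sqrt(d)
    m = np.full(q.shape[0], -np.inf)   # running row-wise max of scores
    l = np.zeros(q.shape[0])           # running softmax denominator
    acc = np.zeros_like(q)             # running (unnormalized) output
    for start in range(0, k.shape[0], chunk_size):
        kc = k[start:start + chunk_size]
        vc = v[start:start + chunk_size]
        s = (q @ kc.T) * scale                       # scores for this chunk
        m_new = np.maximum(m, s.max(axis=-1))
        correction = np.exp(m - m_new)               # rescale old statistics
        p = np.exp(s - m_new[:, None])
        l = l * correction + p.sum(axis=-1)
        acc = acc * correction[:, None] + p @ vc
        m = m_new
    return acc / l[:, None]
```

Because the online softmax is exact, the result matches full attention to numerical precision; only the memory footprint changes with `chunk_size`.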
Jinghan Yao
The Ohio State University
Sam Ade Jacobs
Microsoft Inc.
Masahiro Tanaka
Microsoft Inc.
Olatunji Ruwase
Microsoft Research
Deep Learning, Operating Systems, Programming Languages, Computer Architecture
A. Shafi
The Ohio State University
H. Subramoni
The Ohio State University
Dhabaleswar K. Panda
Professor of Computer Science, The Ohio State University
High Performance Computing, Deep Learning, Big Data, Cloud Computing, Exascale Computing