🤖 AI Summary
Existing open-source frameworks offer little support for training large language models on ultra-long sequences (>32K tokens): training workloads are not optimized to fully use a single GPU's memory, and solutions for pooling memory across multiple GPUs are not readily available to Hugging Face models. Method: Arctic Long Sequence Training (ALST) combines attention-agnostic single-GPU and multi-GPU memory optimizations, built on DeepSpeed's Ulysses sequence parallelism, enabling out-of-the-box multi-million-token training for a wide variety of Hugging Face models while remaining fully compatible with the Hugging Face ecosystem. Contribution/Results: ALST trains Llama 8B with a 500K sequence length on a single H100 (80GB), 3.7M on a single 8xH100 node, and over 15M on a 4-node cluster, an increase of over 400x compared to the 32K baseline, substantially alleviating hardware bottlenecks in ultra-long-context training.
📝 Abstract
Long sequences are critical for applications like RAG, long-document summarization, multi-modality, etc., and modern LLMs, like Llama 4 Scout, support maximum sequence lengths of up to 10 million tokens. However, outside of enterprise labs, long-sequence training is challenging for the AI community due to limited system support in the open-source space. Out of the box, even on a modern NVIDIA H100 80GB GPU cluster, training the Llama 8B model with sequence lengths over 32K runs out of memory on a basic Hugging Face (HF) model for two reasons: i) LLM training workloads are not optimized to fully leverage a single GPU's memory, and ii) existing solutions for leveraging the memory of multiple GPUs are not easily available to HF models, making long-sequence training inaccessible. We address this with Arctic Long Sequence Training (ALST). It offers a combination of attention-agnostic single-GPU and multi-GPU memory optimizations that enables out-of-the-box training at multi-million sequence lengths for a wide variety of HF models. ALST supports training Meta's Llama 8B model with a 500K sequence length on a single H100 GPU, 3.7M on a single 8xH100 GPU node, and over 15M on a 4-node cluster, an increase of over 400x compared to the 32K baseline for the latter. ALST is fully compatible with HF models and is open-sourced via DeepSpeed https://www.deepspeed.ai/tutorials/ulysses-alst-sequence-pallellism/ and Arctic Training https://github.com/snowflakedb/ArcticTraining/blob/main/projects/sequence-parallelism/README.md.
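The Ulysses sequence parallelism that ALST builds on can be illustrated with a single-process sketch (a simplified illustration only, not ALST's actual implementation; the function name `ulysses_all_to_all` is hypothetical): before attention, each of P workers holds a seq/P slice of the sequence with all H attention heads; an all-to-all exchange regroups the data so each worker holds the full sequence for H/P heads, which is why any attention implementation can run unmodified on its local shard.

```python
import numpy as np

def ulysses_all_to_all(shards, num_heads):
    """Simulate the Ulysses all-to-all exchange in a single process:
    turn P sequence shards (each [seq/P, H, d]) into P head shards
    (each [seq, H/P, d]) so each rank can run full-sequence attention
    on its own subset of heads."""
    P = len(shards)
    assert num_heads % P == 0, "head count must divide evenly across ranks"
    hpr = num_heads // P  # heads per rank after the exchange
    out = []
    for r in range(P):
        # rank r gathers its head slice from every rank's sequence shard,
        # then concatenates along the sequence dimension
        pieces = [s[:, r * hpr:(r + 1) * hpr, :] for s in shards]
        out.append(np.concatenate(pieces, axis=0))
    return out

# 4 ranks, sequence length 16, 8 heads, head dim 4
P, seq, H, d = 4, 16, 8, 4
shards = [np.random.rand(seq // P, H, d) for _ in range(P)]
head_shards = ulysses_all_to_all(shards, H)
print(head_shards[0].shape)  # (16, 2, 4): full sequence, 2 of 8 heads
```

In a real multi-GPU run this exchange would be a collective such as `torch.distributed.all_to_all`, and a second all-to-all after attention restores the sequence-sharded layout; the key memory property is that no rank ever materializes the full [seq, H, d] tensor.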