AI Summary
Large language model (LLM) pretraining suffers from prohibitively long training durations and high energy consumption. To address this, we propose Litespark, a highly efficient pretraining framework that achieves high Model FLOPs Utilization (MFU) and cross-model/hardware adaptability without modifying standard Transformer implementations. Litespark introduces synergistic architectural optimizations and algorithmic enhancements to both the attention mechanism and the MLP layers. Its key contributions are: (1) lightweight structural modifications that preserve full compatibility with both pretraining and fine-tuning workflows; and (2) a fine-grained parallelization strategy tailored for multi-node GPU clusters that significantly improves computational resource utilization. Evaluated on Llama-3B and Llama-30B models, Litespark delivers 2x-6x higher training throughput and reduces energy consumption by 55%-83%. It also demonstrates strong scalability and generalization across diverse hardware, including NVIDIA H200-based clusters.
Abstract
Training Large Language Models (LLMs) is plagued by long training times and massive energy consumption, with modern models requiring months of computation and gigawatt-hours of electricity. In light of these challenges, we introduce Litespark, a novel pre-training framework that addresses these inefficiencies through targeted optimizations to transformer attention and MLP layers. Our approach combines architectural improvements with algorithmic enhancements to maximize Model FLOPs Utilization (MFU) while maintaining compatibility with standard transformer implementations. Comprehensive benchmarking on 3B and 30B parameter Llama models using the SlimPajama-627B dataset demonstrates substantial performance gains: 2x-6x training throughput improvement and 55%-83% energy consumption reduction across multi-node H200 GPU clusters. These optimizations are model- and hardware-agnostic, enabling broad applicability across transformer architectures and extending to post-training phases, including supervised fine-tuning and direct preference optimization.
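For readers unfamiliar with the headline metric, MFU is commonly estimated by comparing achieved training FLOPs against the hardware's theoretical peak, using the standard approximation of roughly 6N FLOPs per trained token for a decoder-only transformer with N parameters (forward plus backward pass). The sketch below illustrates that calculation; the function name and all numeric inputs are illustrative assumptions, not values or code from the paper.

```python
def estimate_mfu(num_params: float, tokens_per_sec: float,
                 num_gpus: int, peak_flops_per_gpu: float) -> float:
    """Estimate Model FLOPs Utilization (MFU).

    Uses the common ~6 * N FLOPs-per-token approximation
    (forward + backward) for a decoder-only transformer.
    """
    achieved_flops_per_sec = 6.0 * num_params * tokens_per_sec
    peak_flops_per_sec = num_gpus * peak_flops_per_gpu
    return achieved_flops_per_sec / peak_flops_per_sec


# Illustrative numbers (not from the paper): a 3B-parameter model
# training at 400k tokens/s on 32 GPUs, each with ~989 TFLOP/s
# dense BF16 peak (H100/H200-class).
mfu = estimate_mfu(num_params=3e9, tokens_per_sec=4.0e5,
                   num_gpus=32, peak_flops_per_gpu=989e12)
print(f"MFU: {mfu:.1%}")
```

A throughput gain at fixed hardware translates directly into a proportional MFU gain under this formula, which is why the paper reports both throughput and MFU as complementary views of the same efficiency improvement.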