Liger Kernel: Efficient Triton Kernels for LLM Training

📅 2024-10-14
🏛️ arXiv.org
📈 Citations: 4
Influential: 0
🤖 AI Summary
To address the high computational cost, excessive memory consumption, and low throughput of large language model (LLM) training, this work introduces the first lightweight, modular Triton kernel library designed specifically for LLM training. The approach improves compute and memory-access efficiency through operator fusion, input chunking, and GPU memory optimization. The authors also develop an automated benchmarking and convergence-validation framework that works across GPU architectures, ensuring performance gains do not compromise model accuracy. Experiments on mainstream LLM architectures show an average 20% improvement in training throughput and a 60% reduction in GPU memory footprint compared to standard Hugging Face implementations, while fully preserving training convergence. The work provides an open-source, hardware-adapted, and reproducible infrastructure for efficient LLM training.
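The input-chunking idea mentioned above can be illustrated with a minimal, library-free sketch (the function names and shapes here are hypothetical, not the paper's actual kernels): rather than materializing logits for every token at once, the vocabulary projection and cross-entropy loss are computed chunk by chunk, so peak memory scales with the chunk size instead of the full sequence length.

```python
# Hypothetical sketch of input chunking for the loss computation.
# Pure Python for clarity; the real library implements this as fused
# Triton GPU kernels.
import math

def softmax_xent(logits, target):
    """Cross-entropy of one token's logit vector against a target index."""
    m = max(logits)
    lse = m + math.log(sum(math.exp(x - m) for x in logits))
    return lse - logits[target]

def chunked_loss(hidden, weight, targets, chunk_size=2):
    """Mean cross-entropy computed over chunks of the sequence.

    hidden:  list of per-token hidden vectors (seq_len x d)
    weight:  vocabulary projection matrix (vocab x d)
    targets: list of target token ids (seq_len)
    """
    total, n = 0.0, len(hidden)
    for start in range(0, n, chunk_size):
        for h, t in zip(hidden[start:start + chunk_size],
                        targets[start:start + chunk_size]):
            # Project this token's hidden state to vocabulary logits, then
            # immediately reduce to a scalar loss so the logits can be freed;
            # only one chunk's logits are ever live at a time.
            logits = [sum(hi * wi for hi, wi in zip(h, row)) for row in weight]
            total += softmax_xent(logits, t)
    return total / n
```

Because each token's loss is independent, the chunked result is identical to the unchunked one; only the peak size of the intermediate logits buffer changes.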

📝 Abstract
Training Large Language Models (LLMs) efficiently at scale presents a formidable challenge, driven by their ever-increasing computational demands and the need for enhanced performance. In this work, we introduce Liger-Kernel, an open-source set of Triton kernels developed specifically for LLM training. With kernel optimization techniques like kernel operation fusion and input chunking, our kernels achieve on average a 20% increase in training throughput and a 60% reduction in GPU memory usage for popular LLMs compared to HuggingFace implementations. In addition, Liger-Kernel is designed with modularity, accessibility, and adaptability in mind, catering to both casual and expert users. Comprehensive benchmarks and integration tests are built in to ensure compatibility, performance, correctness, and convergence across diverse computing environments and model architectures. The source code is available under a permissive license at: github.com/linkedin/Liger-Kernel.
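The other technique the abstract names, kernel operation fusion, can be sketched in the same library-free style (illustrative only, not the library's actual kernels): several elementwise passes are merged into one loop, so intermediate buffers are never written out and read back, which is where the memory-traffic savings come from on a GPU.

```python
# Hypothetical sketch of operator fusion. On a GPU, `unfused` corresponds
# to two kernel launches with an intermediate tensor round-tripped through
# memory; `fused` corresponds to one kernel that keeps the intermediate
# value in registers.

def unfused(xs):
    # Pass 1 materializes a full intermediate list...
    scaled = [2.0 * x for x in xs]
    # ...which pass 2 then reads back.
    return [s + 1.0 for s in scaled]

def fused(xs):
    # One pass: each element is scaled and shifted before moving on,
    # with no intermediate buffer.
    return [2.0 * x + 1.0 for x in xs]
```

Both versions compute the same result; fusion changes only how many times data moves through memory, which is often the bottleneck for elementwise ops in LLM training.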
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Computational Resources
Training Efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Triton Kernel Optimization
Large Language Model Training
GPU Memory Efficiency