🤖 AI Summary
To address the CPU memory capacity bottleneck in fine-tuning long-context large language models (LLMs), this work proposes a co-design optimization framework leveraging Compute Express Link (CXL) memory expansion. Methodologically, it introduces a CXL-aware hierarchical memory allocation scheme, coordinates deployment across multiple CXL add-in cards (AICs) to alleviate bandwidth contention, and integrates CPU offloading, concurrency control, and optimized data transfer scheduling. The key contribution is the first deep integration of CXL memory into the LLM training memory stack, enabling transparent, high-efficiency extension of CPU main memory. Experimental results on fine-tuning a 100-layer model with a 128K context demonstrate that the approach scales memory capacity by 3.2× over conventional solutions, improves memory bandwidth utilization by 41%, and reduces end-to-end training latency by 37%. These gains significantly enhance scalability and system efficiency for long-context LLM fine-tuning.
📄 Abstract
The growing prevalence of Large Language Models (LLMs) and their substantial memory requirements have prompted renewed interest in CPU offloading as a method to compensate for limited GPU memory. However, when CPU memory is leveraged to temporarily store intermediate LLM states, it becomes the new bottleneck and quickly reaches the capacity limits of commodity CPUs. In this work, we investigate the effectiveness of Compute Express Link (CXL) add-in card (AIC) memory as an extension to CPU memory, enabling larger model sizes and longer context lengths during fine-tuning. Through extensive benchmarking, this study quantifies the performance overhead introduced by transferring data between CXL memory, the CPU, and GPUs, focusing on how concurrency and data volume influence bandwidth utilization and latency. This study also compares CPU-based optimizer steps when model parameters, gradients, and optimizer states reside in local memory versus CXL memory, revealing that naive adoption of CXL often degrades performance during the optimizer phase. To overcome these challenges, this study proposes a CXL-aware allocation strategy to partition CPU offloading workloads across both local and CXL memory. This study further demonstrates that employing multiple AICs significantly reduces bandwidth contention, thus improving scalability. Experimental results show that these optimizations enable efficient long-context LLM fine-tuning, underscoring CXL as a promising avenue for unlocking the full potential of CPU offloading in long-context LLM fine-tuning.
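To make the partitioning idea concrete, the following is a minimal, hypothetical sketch of a CXL-aware allocation policy in the spirit the abstract describes: bandwidth-sensitive data touched every CPU optimizer step is kept in local DRAM, while colder buffers spill to slower CXL AIC memory. All names, sizes, access counts, and the greedy heuristic itself are illustrative assumptions, not details from the paper.

```python
# Hypothetical CXL-aware allocation sketch (not the paper's algorithm).
# Hot, frequently touched tensors fill local DRAM first; the rest
# spills to CXL add-in-card (AIC) memory.
from dataclasses import dataclass

@dataclass
class Tensor:
    name: str
    size_gb: float
    touches_per_step: int  # CPU reads/writes per optimizer step (assumed)

def cxl_aware_allocate(tensors, local_capacity_gb):
    """Greedy partition: rank tensors by access frequency per GB so the
    most bandwidth-sensitive data is preferentially placed locally."""
    placement = {}
    remaining = local_capacity_gb
    for t in sorted(tensors,
                    key=lambda t: t.touches_per_step / t.size_gb,
                    reverse=True):
        if t.size_gb <= remaining:
            placement[t.name] = "local"
            remaining -= t.size_gb
        else:
            placement[t.name] = "cxl"
    return placement

# Illustrative workload for CPU offloading during fine-tuning.
workload = [
    Tensor("optimizer_states", 48.0, 6),       # updated every step
    Tensor("gradients", 24.0, 2),
    Tensor("offloaded_activations", 80.0, 1),  # cold until backward pass
    Tensor("parameters_fp32", 24.0, 1),
]
print(cxl_aware_allocate(workload, local_capacity_gb=96.0))
```

With these assumed numbers, the hot optimizer states, gradients, and master parameters stay in local DRAM (48 + 24 + 24 = 96 GB), and the large, rarely touched activation buffer spills to CXL, mirroring the paper's observation that naively placing optimizer-phase data in CXL memory hurts performance.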