An Efficient Heterogeneous Co-Design for Fine-Tuning on a Single GPU

📅 2026-03-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high memory overhead of large language model (LLM) fine-tuning, which hinders efficient execution on a single consumer-grade GPU. To overcome this challenge, the authors propose SlideFormer, a system leveraging heterogeneous co-design for efficient fine-tuning of ultra-large models. Its key innovations include a lightweight asynchronous engine, a CPU-GPU sliding window collaboration mechanism, efficient heterogeneous memory management, optimized Triton kernels, and multi-level storage I/O integration. Experimental results demonstrate that SlideFormer enables fine-tuning of models with over 123 billion parameters on a single RTX 4090 GPU, reducing memory consumption by 50%, increasing batch size by 8×, supporting 6× larger models, and achieving 1.40–6.27× higher throughput compared to baselines, while maintaining over 95% of peak performance on both NVIDIA and AMD GPUs.

📝 Abstract
Fine-tuning Large Language Models (LLMs) has become essential for domain adaptation, but its memory-intensive nature exceeds the capabilities of most GPUs. To address this challenge and democratize LLM fine-tuning, we present SlideFormer, a novel system designed for single-GPU environments. Our innovations are: (1) a lightweight asynchronous engine that treats the GPU as a sliding window and overlaps GPU computation with CPU updates and multi-tier I/O; (2) a highly efficient heterogeneous memory management scheme that significantly reduces peak memory usage; and (3) optimized Triton kernels that resolve key bottlenecks, integrated with advanced storage I/O. This collaborative design enables fine-tuning of the latest 123B+ models on a single RTX 4090, supporting up to 8x larger batch sizes and 6x larger models. In evaluations, SlideFormer achieves 1.40x to 6.27x higher throughput while roughly halving CPU/GPU memory usage compared to baselines, sustaining >95% of peak performance on both NVIDIA and AMD GPUs.
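The sliding-window idea in the abstract can be sketched as a small pipeline: while the "GPU" works on layer i, the next layer's weights are prefetched from host memory and the previous layer's optimizer update runs on the CPU. The sketch below is our illustration only, not the paper's implementation; the function names (`prefetch`, `gpu_compute`, `cpu_update`) are hypothetical stand-ins, and threads simulate the overlap that the real engine would achieve with CUDA streams and pinned-memory copies.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the engine's three overlapped stages
# (names are ours, not SlideFormer's API).
def prefetch(layer):
    # Simulates a CPU->GPU copy of one layer's weights.
    return f"weights[{layer}]"

def gpu_compute(layer, weights):
    # Simulates forward/backward for one layer inside the GPU window.
    return f"grads[{layer}]"

def cpu_update(layer, grads):
    # Simulates the optimizer step applied in host memory.
    return f"updated[{layer}]"

def sliding_window_pass(num_layers):
    """One pass over the model: for each layer i, overlap
    the prefetch of layer i+1 and the CPU update of layer i-1
    with the 'GPU' computation of layer i."""
    log = []
    with ThreadPoolExecutor(max_workers=2) as pool:
        next_weights = pool.submit(prefetch, 0)
        pending_update = None
        for i in range(num_layers):
            weights = next_weights.result()          # wait for this layer's weights
            if i + 1 < num_layers:
                next_weights = pool.submit(prefetch, i + 1)  # overlap next prefetch
            grads = gpu_compute(i, weights)          # compute while transfers run
            if pending_update is not None:
                log.append(pending_update.result())  # drain previous CPU update
            pending_update = pool.submit(cpu_update, i, grads)
        log.append(pending_update.result())
    return log

print(sliding_window_pass(4))
```

Because each stage only depends on its neighbors in the window, the three streams (transfer in, compute, update out) can run concurrently, which is the overlap the asynchronous engine exploits.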
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
fine-tuning
memory-intensive
single GPU
heterogeneous computing
Innovation

Methods, ideas, or system contributions that make the work stand out.

heterogeneous co-design
single-GPU fine-tuning
asynchronous engine
memory management
Triton kernels
Ruijia Yang
Hong Kong University of Science and Technology (Guangzhou)
Zeyi Wen
Assistant Professor at HKUST(Guangzhou)
Efficient LLMs · MLSys · HPO · HPC