LoRAFusion: Efficient LoRA Fine-Tuning for LLMs

📅 2025-09-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LoRA fine-tuning systems suffer from two key bottlenecks: (1) redundant memory accesses induced by large activation tensors, resulting in high runtime overhead; and (2) sequential execution of multiple LoRA adapters on shared GPUs, lacking coordinated scheduling to eliminate pipeline bubbles, optimize communication–computation overlap, and ensure load balancing. This paper introduces LoRAFusion—the first efficient system designed for concurrent multi-LoRA fine-tuning. Its core innovations include computation-graph–driven kernel fusion for memory operations, drastically reducing memory redundancy; and dependency-aware bin-packing for multi-task grouping coupled with adaptive micro-batch scheduling, enhancing GPU utilization and parallel efficiency. Experiments demonstrate that LoRAFusion achieves 1.96× end-to-end speedup over Megatron-LM and outperforms mLoRA by 1.29×, while its fused kernels are plug-and-play, offering both generality and high performance.
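The LoRA path itself adds few FLOPs; the overhead the summary describes comes from extra memory passes over the large activation tensor. A minimal NumPy sketch of the LoRA forward computation (names and shapes illustrative, not taken from the paper's code) shows where the low-rank path and the memory-bound elementwise add sit relative to the base GEMM:

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16.0):
    """LoRA forward: y = x @ W + (alpha / r) * (x @ A) @ B.

    x: (tokens, d_in) activation; W: (d_in, d_out) frozen base weight;
    A: (d_in, r), B: (r, d_out) low-rank adapter factors, r << d_in.
    """
    r = A.shape[1]
    base = x @ W                      # compute-bound GEMM on the large activation
    lora = (x @ A) @ B                # low-rank path: cheap in FLOPs
    return base + (alpha / r) * lora  # memory-bound: re-reads the big output tensor

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
W = rng.standard_normal((8, 8))
A = rng.standard_normal((8, 2))
B = np.zeros((2, 8))  # B starts at zero, so the adapter is initially a no-op
y = lora_forward(x, W, A, B)
assert np.allclose(y, x @ W)
```

Run as separate kernels, the scale-and-add epilogue makes an extra round trip through memory for the full-size tensor; fusing such memory-bound steps into the surrounding kernels is the kind of redundancy the paper targets.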

📝 Abstract
Low-Rank Adaptation (LoRA) has become the leading Parameter-Efficient Fine-Tuning (PEFT) method for Large Language Models (LLMs), as it significantly reduces GPU memory usage while maintaining competitive fine-tuned model quality on downstream tasks. Despite these benefits, we identify two key inefficiencies in existing LoRA fine-tuning systems. First, they incur substantial runtime overhead due to redundant memory accesses on large activation tensors. Second, they miss the opportunity to concurrently fine-tune multiple independent LoRA adapters that share the same base model on the same set of GPUs. This leads to missed performance gains such as reduced pipeline bubbles, better communication overlap, and improved GPU load balance. To address these issues, we introduce LoRAFusion, an efficient LoRA fine-tuning system for LLMs. At the kernel level, we propose a graph-splitting method that fuses memory-bound operations. This design eliminates unnecessary memory accesses and preserves the performance of compute-bound GEMMs without incurring the cost of recomputation or synchronization. At the scheduling level, LoRAFusion introduces an adaptive batching algorithm for multi-job fine-tuning. It first splits LoRA adapters into groups to intentionally stagger batch execution across jobs, and then solves a bin-packing problem within each group to generate balanced, dependency-aware microbatches. LoRAFusion achieves up to $1.96\times$ ($1.47\times$ on average) end-to-end speedup compared to Megatron-LM, and up to $1.46\times$ ($1.29\times$ on average) improvement over mLoRA, the state-of-the-art multi-LoRA fine-tuning system. Our fused kernel achieves up to $1.39\times$ ($1.27\times$ on average) kernel performance improvement and can directly serve as a plug-and-play replacement in existing LoRA systems. We open-source LoRAFusion at https://github.com/CentML/lorafusion.
Problem

Research questions and friction points this paper is trying to address.

Reduces runtime overhead from redundant memory accesses on large activations
Enables concurrent fine-tuning of multiple LoRA adapters sharing base model
Improves GPU utilization by addressing pipeline bubbles and communication inefficiencies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fuses memory-bound operations via graph-splitting method
Uses adaptive batching algorithm for multi-job fine-tuning
Splits LoRA adapters into groups for balanced microbatches
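The grouping-then-packing idea can be sketched with a simple first-fit-decreasing heuristic: pack samples (measured in tokens) into microbatches under a fixed token budget so their workloads balance. This is an illustrative stand-in only; the paper's actual algorithm is dependency-aware and coordinates across pipeline stages and jobs:

```python
def pack_microbatches(sample_lens, capacity):
    """First-fit-decreasing bin packing: assign samples (by token count)
    to microbatches so each stays within a token budget `capacity`."""
    bins = []  # each bin: [remaining_capacity, [sample indices]]
    order = sorted(range(len(sample_lens)), key=lambda i: -sample_lens[i])
    for i in order:
        n = sample_lens[i]
        for b in bins:               # place into the first bin with room
            if b[0] >= n:
                b[0] -= n
                b[1].append(i)
                break
        else:                        # no bin fits: open a new microbatch
            bins.append([capacity - n, [i]])
    return [b[1] for b in bins]

mbs = pack_microbatches([900, 700, 600, 400, 300, 100], capacity=1024)
# → [[0, 5], [1, 4], [2, 3]]: three microbatches of 1000 tokens each
```

With this input the heuristic yields three evenly loaded microbatches, which is the load-balance property that reduces pipeline bubbles when microbatches from different jobs are interleaved.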