🤖 AI Summary
To address the GPU memory pressure and throughput degradation caused by frequently loading and unloading large collections of LoRA adapters during real-time serving, this paper proposes a "compress-then-serve" paradigm. Methodologically, it introduces (1) a joint compression framework that represents LoRA adapters via a shared low-rank basis paired with adapter-specific scaling matrices; and (2) a clustering mechanism that learns groups of LoRAs amenable to joint compression, allowing the approach to scale to large adapter collections. The approach combines low-rank decomposition, cross-adapter parameter sharing, and learned clustering. Experiments with up to 1,000 concurrently served LoRA adapters show that the system sustains 80% of single-adapter throughput, substantially reduces GPU memory footprint and adapter-switching overhead, and largely preserves task performance.
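To make the shared-basis idea concrete, here is a minimal NumPy sketch of joint compression: a collection of LoRA updates ΔW_i is approximated as U Σ_i Vᵀ, with bases U, V shared across adapters and a small Σ_i per adapter. The SVD-based construction and the function name are illustrative assumptions for this sketch, not the paper's exact algorithm.

```python
import numpy as np

def joint_compress(deltas, r):
    """Sketch: compress LoRA updates {ΔW_i} into shared bases.

    Each ΔW_i (d_out x d_in) is approximated as U @ Sigma_i @ V.T,
    where U (d_out x r) and V (d_in x r) are shared across adapters
    and Sigma_i (r x r) is adapter-specific. (Illustrative construction,
    not the paper's algorithm.)
    """
    # Shared column basis: top-r left singular vectors of [ΔW_1 | ΔW_2 | ...]
    U, _, _ = np.linalg.svd(np.hstack(deltas), full_matrices=False)
    U = U[:, :r]
    # Shared row basis: top-r left singular vectors of the stacked transposes
    V, _, _ = np.linalg.svd(np.hstack([d.T for d in deltas]), full_matrices=False)
    V = V[:, :r]
    # Adapter-specific scaling matrices: project each update onto the bases
    sigmas = [U.T @ d @ V for d in deltas]
    return U, V, sigmas

# Toy example: 5 rank-4 adapters on a 64x32 layer, compressed to shared rank 16
rng = np.random.default_rng(0)
deltas = [rng.standard_normal((64, 4)) @ rng.standard_normal((4, 32))
          for _ in range(5)]
U, V, sigmas = joint_compress(deltas, r=16)
recon = U @ sigmas[0] @ V.T  # reconstructed ΔW_0, shape (64, 32)
```

The memory argument is then simple arithmetic: storing n independent rank-r̂ adapters on a d_out×d_in layer costs n·r̂·(d_out+d_in) parameters, while the shared scheme costs r·(d_out+d_in) for the bases plus n·r² for the scaling matrices, which grows much more slowly in n when r is modest.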
📝 Abstract
Fine-tuning large language models (LLMs) with low-rank adaptations (LoRAs) has become common practice, often yielding numerous copies of the same LLM differing only in their LoRA updates. This paradigm presents challenges for systems that serve real-time responses to queries that each involve a different LoRA. Prior works optimize the design of such systems but still require continuous loading and offloading of LoRAs, as it is infeasible to store thousands of LoRAs in GPU memory. To mitigate this issue, we investigate the efficacy of compression when serving LoRAs. We propose a method for the joint compression of LoRAs into a shared basis paired with LoRA-specific scaling matrices. We extend our algorithm to learn clusters of LoRAs that are amenable to joint compression, allowing it to scale gracefully to large LoRA collections. Our experiments with up to 1000 LoRAs demonstrate that compressed LoRAs preserve performance while offering major throughput gains in realistic serving scenarios with over a thousand LoRAs, maintaining 80% of the throughput of serving a single LoRA.
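As a rough illustration of how learning compression-friendly clusters might look (a k-means-style alternation sketched here as an assumption, not the paper's algorithm), one can alternate between fitting shared bases per cluster and reassigning each LoRA to the cluster that reconstructs it best:

```python
import numpy as np

def cluster_and_compress(deltas, k, r, iters=5, seed=0):
    """Sketch: alternate between (1) fitting rank-r shared bases per cluster
    and (2) reassigning each LoRA update to the best-reconstructing cluster.
    (Illustrative k-means-style scheme, not the paper's algorithm.)"""
    rng = np.random.default_rng(seed)
    assign = rng.integers(0, k, size=len(deltas))
    bases = []
    for _ in range(iters):
        bases = []
        for c in range(k):
            members = [d for d, a in zip(deltas, assign) if a == c]
            if not members:  # re-seed an emptied cluster with a random adapter
                members = [deltas[rng.integers(len(deltas))]]
            U, _, _ = np.linalg.svd(np.hstack(members), full_matrices=False)
            V, _, _ = np.linalg.svd(np.hstack([m.T for m in members]),
                                    full_matrices=False)
            bases.append((U[:, :r], V[:, :r]))
        for i, d in enumerate(deltas):
            errs = [np.linalg.norm(d - U @ (U.T @ d @ V) @ V.T)
                    for U, V in bases]
            assign[i] = int(np.argmin(errs))
    return assign, bases

# Toy demo: two families of LoRA updates built from two distinct shared bases
rng = np.random.default_rng(1)
Ua = np.linalg.qr(rng.standard_normal((64, 8)))[0]
Va = np.linalg.qr(rng.standard_normal((32, 8)))[0]
Ub = np.linalg.qr(rng.standard_normal((64, 8)))[0]
Vb = np.linalg.qr(rng.standard_normal((32, 8)))[0]
deltas = [Ua @ rng.standard_normal((8, 8)) @ Va.T for _ in range(6)]
deltas += [Ub @ rng.standard_normal((8, 8)) @ Vb.T for _ in range(6)]
assign, bases = cluster_and_compress(deltas, k=2, r=8)
```

Grouping adapters this way keeps each cluster's shared basis small and accurate, which is what lets the joint-compression scheme scale gracefully to large LoRA collections.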