Sparse High Rank Adapters

πŸ“… 2024-06-19
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 2
✨ Influential: 1
πŸ“„ PDF
πŸ€– AI Summary
To address the trade-off between inference overhead and rapid adapter switching, as well as the concept loss caused by using multiple LoRA adapters concurrently, this paper proposes Sparse High Rank Adapters (SHiRA), a parameter-efficient adaptation paradigm that incurs no inference overhead. SHiRA directly fine-tunes only 1–2% of the base model's weights, yielding a highly sparse adapter that can be switched rapidly even in the fused mode and that supports multi-adapter fusion. Theoretical and empirical analysis shows how high sparsity reduces concept loss when fusing multiple adapters. Experiments on LVMs and LLMs show that SHiRA significantly outperforms LoRA. A latency- and memory-efficient implementation based on the PEFT library trains at nearly the same speed as LoRA while consuming up to 16% lower peak GPU memory, and loading SHiRA on a base model is 5–16× faster on a CPU than LoRA fusion.
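The core idea, tuning only a scattered 1–2% of the base weights so the resulting update is sparse yet high rank, can be sketched in numpy. This is an illustrative toy, not the paper's implementation: the matrix size, mask selection, and the rank-4 LoRA baseline are all assumptions made here for contrast.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64                               # illustrative weight-matrix size
W = rng.standard_normal((d, d))      # frozen base weight

# SHiRA-style adapter: tune only ~2% of the entries, chosen by a fixed mask.
nnz = int(0.02 * d * d)              # number of trainable scalars
flat = rng.choice(d * d, size=nnz, replace=False)
rows, cols = np.unravel_index(flat, (d, d))
vals = rng.standard_normal(nnz)      # stands in for the learned sparse update

shira_delta = np.zeros((d, d))
shira_delta[rows, cols] = vals       # scattered entries -> delta can be high rank

# LoRA delta of rank r for contrast: its rank is capped at r by construction.
r = 4
lora_delta = rng.standard_normal((d, r)) @ rng.standard_normal((r, d))

print("SHiRA nnz fraction:", nnz / (d * d))
print("SHiRA delta rank:", np.linalg.matrix_rank(shira_delta))
print("LoRA  delta rank:", np.linalg.matrix_rank(lora_delta))
```

Because the nonzero entries are scattered across many rows and columns, the sparse delta's rank far exceeds that of a low-rank LoRA update of comparable parameter count, which is the "high rank" in the paper's title.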

πŸ“ Abstract
Low Rank Adaptation (LoRA) has gained massive attention in the recent generative AI research. One of the main advantages of LoRA is its ability to be fused with pretrained models, adding no overhead during inference. However, from a mobile deployment standpoint, we can either avoid inference overhead in the fused mode but lose the ability to switch adapters rapidly, or suffer significant (up to 30% higher) inference latency while enabling rapid switching in the unfused mode. LoRA also exhibits concept-loss when multiple adapters are used concurrently. In this paper, we propose Sparse High Rank Adapters (SHiRA), a new paradigm which incurs no inference overhead, enables rapid switching, and significantly reduces concept-loss. Specifically, SHiRA can be trained by directly tuning only 1-2% of the base model weights while leaving others unchanged. This results in a highly sparse adapter which can be switched directly in the fused mode. We further provide theoretical and empirical insights on how high sparsity in SHiRA can aid multi-adapter fusion by reducing concept loss. Our extensive experiments on LVMs and LLMs demonstrate that finetuning only a small fraction of the parameters in the base model significantly outperforms LoRA while enabling both rapid switching and multi-adapter fusion. Finally, we provide a latency- and memory-efficient SHiRA implementation based on Parameter-Efficient Finetuning (PEFT) Library which trains at nearly the same speed as LoRA while consuming up to 16% lower peak GPU memory, thus making SHiRA easy to adopt for practical use cases. To demonstrate rapid switching benefits during inference, we show that loading SHiRA on a base model can be 5x-16x faster than LoRA fusion on a CPU.
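The rapid-switching claim follows from sparsity: an adapter is just a set of (index, value) pairs, so fusing or unfusing it is a scatter-add over a tiny fraction of the weights rather than a full low-rank matrix fusion. A minimal sketch of that mechanism, with hypothetical helper names (`apply_adapter`, `make_adapter`) that are not the PEFT library's API:

```python
import numpy as np

def apply_adapter(W, idx, vals):
    """Fuse a sparse adapter in place: O(nnz) work, no extra inference layers."""
    W[idx] += vals

def remove_adapter(W, idx, vals):
    """Unfuse by subtracting the same sparse values."""
    W[idx] -= vals

def make_adapter(seed, frac=0.01, shape=(64, 64)):
    """Toy adapter touching ~frac of the weights at unique positions."""
    r = np.random.default_rng(seed)
    n = int(frac * shape[0] * shape[1])
    flat = r.choice(shape[0] * shape[1], size=n, replace=False)
    return np.unravel_index(flat, shape), r.standard_normal(n)

rng = np.random.default_rng(1)
W = rng.standard_normal((64, 64))    # fused base weight
W0 = W.copy()

idx_a, vals_a = make_adapter(2)
idx_b, vals_b = make_adapter(3)

apply_adapter(W, idx_a, vals_a)      # task A active, still in fused mode
remove_adapter(W, idx_a, vals_a)     # switch ...
apply_adapter(W, idx_b, vals_b)      # ... to task B
remove_adapter(W, idx_b, vals_b)     # back to the base model

assert np.allclose(W, W0)            # switching is exactly reversible
```

Since only the touched entries are read and written, swapping adapters scales with the 1–2% of modified weights, which is the intuition behind the reported 5–16× faster CPU loading versus LoRA fusion.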
Problem

Research questions and friction points this paper is trying to address.

Adaptive Low-Rank Approximation
Parameter Switching in AI Models
Efficient Learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparse High-Rank Adapter
Parameter-Efficient Fine-Tuning
Memory-Optimized Switching
πŸ”Ž Similar Papers
No similar papers found.