AI Summary
To address the context-agnostic behavior and inefficient inference caused by static LoRA adapter fusion in multi-task settings, this paper proposes a sentence-level dynamic LoRA fusion mechanism. Unlike conventional token-level approaches, the method introduces a lightweight mini-MLP gating module (5M parameters) combined with top-p sampling to dynamically weight and fuse multiple LoRA adapters at the sentence level, enabling context-aware adaptation and parallelizable inference. The approach is fully compatible with the PEFT framework and requires no modification to the base model. Evaluated on 26 tasks, it achieves an average accuracy of 92.34% on multiple-choice tasks and significant improvements in BLEU and ROUGE scores, while keeping inference latency within twice that of a single LoRA. Its core contribution is a lightweight, plug-and-play architecture supporting sentence-granular, context-driven, low-overhead dynamic LoRA fusion.
Abstract
Recent advancements in Large Language Models (LLMs) have achieved robust performance across diverse tasks, but fine-tuning these models for specific domains remains resource-intensive. Parameter-Efficient Fine-Tuning (PEFT) methods like Low-Rank Adaptation (LoRA) address this challenge by fine-tuning a small subset of parameters. However, existing methods for fusing multiple LoRAs lack dynamic fusion based on contextual inputs and often increase inference time due to token-level operations. We propose DLP-LoRA, a Dynamic Lightweight Plugin that employs a mini-MLP module with only 5M parameters to dynamically fuse multiple LoRAs at the sentence level using top-p sampling strategies. This approach reduces inference time to less than twice that of single LoRA inference by leveraging parallel computation. Evaluations across 26 tasks, including multiple-choice questions and question answering, demonstrate that DLP-LoRA achieves an average accuracy of 92.34% on multiple-choice datasets and significant improvements in BLEU and ROUGE scores on QA datasets, outperforming alternatives across different LLM backbones under composite task settings. DLP-LoRA effectively balances performance and efficiency, making it a practical solution for dynamic multi-task adaptation in LLMs. Our code is available at https://github.com/MeCuping/DLP-LoRA.
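The fusion mechanism summarized above, a small gating MLP that scores the available LoRA adapters once per sentence, keeps only the top-p mass, and applies a weighted sum of the surviving low-rank deltas, can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function names (`mlp_gate`, `top_p_weights`, `fuse_lora`), the gate architecture, and all matrix shapes are assumptions made for the example.

```python
import numpy as np

def mlp_gate(sentence_emb, W1, W2):
    # Illustrative mini-MLP gate: map a sentence embedding to a
    # probability distribution over the available LoRA adapters.
    h = np.tanh(sentence_emb @ W1)
    logits = h @ W2
    e = np.exp(logits - logits.max())
    return e / e.sum()

def top_p_weights(probs, p=0.9):
    # Keep the smallest set of adapters whose cumulative probability
    # reaches p, zero out the rest, and renormalize (top-p / nucleus).
    order = np.argsort(probs)[::-1]
    cum = np.cumsum(probs[order])
    k = int(np.searchsorted(cum, p)) + 1
    weights = np.zeros_like(probs)
    weights[order[:k]] = probs[order[:k]]
    return weights / weights.sum()

def fuse_lora(x, loras, weights):
    # Weighted sum of low-rank deltas: sum_i w_i * (x @ A_i) @ B_i.
    # Because the weights are fixed for the whole sentence, each
    # adapter's contribution can be computed in parallel.
    delta = np.zeros((x.shape[0], loras[0][1].shape[1]))
    for (A, B), w in zip(loras, weights):
        if w > 0:  # skip adapters pruned by top-p
            delta += w * (x @ A) @ B
    return delta

# Toy dimensions (assumed for illustration only).
rng = np.random.default_rng(0)
d, r, n_adapters = 16, 4, 5
W1 = rng.standard_normal((d, 8))
W2 = rng.standard_normal((8, n_adapters))
loras = [(rng.standard_normal((d, r)) * 0.1,
          rng.standard_normal((r, d)) * 0.1) for _ in range(n_adapters)]

sentence_emb = rng.standard_normal(d)      # one embedding per sentence
probs = mlp_gate(sentence_emb, W1, W2)     # gate runs once per sentence
weights = top_p_weights(probs, p=0.9)      # prune to the top-p adapters
x = rng.standard_normal((3, d))            # token hidden states
delta = fuse_lora(x, loras, weights)       # added to the frozen layer's output
```

The key property the sketch captures is that the gate runs once per sentence rather than per token, so the fused delta can be precomputed and applied with near single-LoRA cost.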