LoRA on the Go: Instance-level Dynamic LoRA Selection and Merging

📅 2025-11-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional LoRA adapters are designed for single-task scenarios and struggle with diverse, unpredictable inputs in real-world applications. Existing multi-LoRA fusion approaches rely on labeled data or task-specific fine-tuning, incurring high scalability costs. This paper proposes a training-free, annotation-free framework for instance-level dynamic LoRA selection and fusion: it extracts semantic signals via a single forward pass, dynamically scores the relevance of each LoRA to the current input, and merges the selected adapters with a weighted combination, remaining compatible with diverse model architectures. Evaluated across five NLP benchmarks, 27 datasets, and three large language model families, the method achieves up to a 3.6% absolute improvement over supervised baselines on certain tasks while maintaining efficient inference. Its core contribution is the first fully training-free and annotation-free dynamic adapter ensemble, enabling real-time, input-adaptive specialization without any parameter updates or human supervision.

📝 Abstract
Low-Rank Adaptation (LoRA) has emerged as a parameter-efficient approach for fine-tuning large language models. However, conventional LoRA adapters are typically trained for a single task, limiting their applicability in real-world settings where inputs may span diverse and unpredictable domains. At inference time, existing approaches combine multiple LoRAs to improve performance on diverse tasks, but they usually require labeled data or additional task-specific training, which is expensive at scale. In this work, we introduce LoRA on the Go (LoGo), a training-free framework that dynamically selects and merges adapters at the instance level without any additional requirements. LoGo leverages signals extracted from a single forward pass through LoRA adapters to identify the most relevant adapters and determine their contributions on the fly. Across 5 NLP benchmarks, 27 datasets, and 3 model families, LoGo outperforms training-based baselines on some tasks by up to 3.6% while remaining competitive on other tasks and maintaining inference throughput, highlighting its effectiveness and practicality.
Problem

Research questions and friction points this paper is trying to address.

Dynamic selection of relevant LoRA adapters for diverse input domains
Merging multiple LoRA adapters without requiring labeled training data
Maintaining inference efficiency while handling unpredictable task domains
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic LoRA selection and merging at instance level
Training-free framework using single forward pass signals
On-the-fly adapter contribution determination without retraining
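The paper does not spell out its scoring function here, but the general pattern it describes (score each adapter from one forward pass, then merge the low-rank deltas with those scores) can be sketched as below. The function name `logo_style_merge` and the use of adapter-output magnitude as the relevance signal are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def logo_style_merge(x, base_W, adapters, top_k=2):
    """Hypothetical sketch of instance-level dynamic LoRA merging.

    x        : input vector for this instance
    base_W   : frozen base weight matrix (d_out x d_in)
    adapters : list of (A, B) low-rank pairs, A: r x d_in, B: d_out x r
    top_k    : number of adapters to keep per instance

    Relevance signal (an assumption for illustration): the norm of each
    adapter's output B @ (A @ x) on this input. No labels, no training.
    """
    scores = np.array([np.linalg.norm(B @ (A @ x)) for A, B in adapters])
    top = np.argsort(scores)[-top_k:]           # indices of the top-k adapters
    weights = softmax(scores[top])              # on-the-fly contribution weights
    # Weighted merge of the selected low-rank deltas into one effective weight.
    delta = sum(w * (B @ A) for w, (A, B) in zip(weights, (adapters[i] for i in top)))
    return (base_W + delta) @ x, weights
```

Per-instance merging like this keeps the base model frozen and touches no labels; only the mixing weights change from input to input.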