🤖 AI Summary
This work addresses the negative transfer commonly observed in multi-task low-rank adaptation (LoRA), where conflicting gradients across tasks degrade performance below that of single-task fine-tuning. To mitigate this issue, the authors propose Ortho-LoRA, which introduces an orthogonal gradient projection mechanism into the LoRA framework for the first time. Leveraging LoRA's dual-branch low-dimensional subspace structure, Ortho-LoRA dynamically projects conflicting task gradients onto mutually orthogonal complementary subspaces, effectively decoupling inter-task interference. The method incurs negligible computational overhead and achieves substantial improvements over standard multi-task joint training on the GLUE benchmark, closing 95% of the performance gap between multi-task and single-task fine-tuning.
📄 Abstract
Multi-Task Learning (MTL) combined with Low-Rank Adaptation (LoRA) has emerged as a promising direction for parameter-efficient deployment of Large Language Models (LLMs). By sharing a single adapter across multiple tasks, one can significantly reduce storage overhead. However, this approach suffers from negative transfer, where conflicting gradient updates from distinct tasks degrade the performance of individual tasks compared to single-task fine-tuning. This problem is exacerbated in LoRA due to the low-rank constraint, which limits the optimization landscape's capacity to accommodate diverse task requirements. In this paper, we propose Ortho-LoRA, a gradient projection method specifically tailored for the bipartite structure of LoRA. Ortho-LoRA dynamically projects conflicting task gradients onto the orthogonal complement of each other within the intrinsic LoRA subspace. Extensive experiments on the GLUE benchmark demonstrate that Ortho-LoRA effectively mitigates task interference, outperforming standard joint training and recovering 95% of the performance gap between multi-task and single-task baselines with negligible computational overhead.
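The abstract's central operation, projecting one task's gradient onto the orthogonal complement of a conflicting task's gradient, can be illustrated with a minimal sketch. The exact Ortho-LoRA algorithm is not specified here, so this follows the standard conflict-aware projection rule (remove the component along the other gradient only when the dot product is negative), applied to flattened gradients as a stand-in for the LoRA subspace; the function name and dimensions are illustrative assumptions.

```python
import numpy as np

def project_if_conflicting(g_i, g_j, eps=1e-12):
    """Sketch of conflict-aware orthogonal projection (not the paper's
    exact algorithm): if g_i conflicts with g_j (negative dot product),
    subtract g_i's component along g_j, leaving g_i orthogonal to g_j."""
    dot = float(np.dot(g_i, g_j))
    if dot < 0.0:  # gradients pull in opposing directions
        g_i = g_i - (dot / (float(np.dot(g_j, g_j)) + eps)) * g_j
    return g_i

# Two conflicting task gradients in a (flattened) low-rank subspace
g1 = np.array([1.0, -1.0, 0.5])
g2 = np.array([-1.0, 0.5, 0.2])

g1_proj = project_if_conflicting(g1, g2)
print(np.dot(g1_proj, g2))  # ~0: the conflicting component is removed
```

Non-conflicting gradients (non-negative dot product) pass through unchanged, so cooperative transfer between tasks is preserved.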