ICM-Fusion: In-Context Meta-Optimized LoRA Fusion for Multi-Task Adaptation

๐Ÿ“… 2025-08-06
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
To address domain forgetting and weight interference in pre-trained LoRA multi-task fusion, this paper proposes ICM-Fusion (In-Context Meta LoRA Fusion), a framework that combines meta-learning with in-context adaptation. Methodologically, it introduces task vector arithmetic to encode task-specific knowledge, employs learned manifold projections to dynamically align conflicting cross-domain optimization directions, and incorporates a Fusion VAE (F-VAE) to reconstruct and regularize the fused weights in latent space, thereby enforcing geometric consistency among task vectors. The framework supports heterogeneous model architectures and enables few-shot multi-task weight fusion without additional parameters. Empirically, ICM-Fusion achieves a 12.7% reduction in average loss across vision and language multi-task benchmarks, and it maintains robust task enhancement under long-tailed and few-shot settings, demonstrating strong generalization and broad model compatibility.
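The task vector arithmetic mentioned above builds on a standard idea: a LoRA adapter's low-rank update ΔW = BA can be treated as a "task vector", and multiple adapters can be fused by combining these vectors. The sketch below shows only that baseline arithmetic in NumPy; the weighting scheme, variable names, and uniform averaging are illustrative assumptions, not the paper's learned fusion procedure.

```python
import numpy as np

def lora_delta(A, B):
    """Task vector of one LoRA adapter: the low-rank update dW = B @ A."""
    return B @ A

def fuse_task_vectors(deltas, weights=None):
    """Weighted sum of task vectors (plain task arithmetic; ICM-Fusion
    additionally adjusts vector orientations before fusing)."""
    if weights is None:
        weights = np.full(len(deltas), 1.0 / len(deltas))
    return sum(w * d for w, d in zip(weights, deltas))

rng = np.random.default_rng(0)
d, r = 8, 2  # hypothetical model width and LoRA rank
A1, B1 = rng.normal(size=(r, d)), rng.normal(size=(d, r))
A2, B2 = rng.normal(size=(r, d)), rng.normal(size=(d, r))

fused = fuse_task_vectors([lora_delta(A1, B1), lora_delta(A2, B2)])
print(fused.shape)  # (8, 8)
```

A fused ΔW of this form can be added back onto the frozen base weight matrix, which is what makes the fusion parameter-free at inference time.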


๐Ÿ“ Abstract
Enabling multi-task adaptation in pre-trained Low-Rank Adaptation (LoRA) models is crucial for enhancing their generalization capabilities. Most existing pre-trained LoRA fusion methods decompose weight matrices, sharing similar parameters while merging divergent ones. However, this paradigm inevitably induces inter-weight conflicts and leads to catastrophic domain forgetting. While incremental learning enables adaptation to multiple tasks, it struggles to generalize in few-shot scenarios; consequently, when the weight data follow a long-tailed distribution, the fused weights are prone to forgetting. To address this issue, we propose In-Context Meta LoRA Fusion (ICM-Fusion), a novel framework that synergizes meta-learning with in-context adaptation. The key innovation lies in our task vector arithmetic, which dynamically balances conflicting optimization directions across domains through learned manifold projections. ICM-Fusion obtains the optimal task vector orientation for the fused model in the latent space by adjusting the orientations of the task vectors. The fused LoRA is then reconstructed by a self-designed Fusion VAE (F-VAE) to realize multi-task LoRA generation. Extensive experiments on visual and linguistic tasks demonstrate that ICM-Fusion adapts to a wide range of model architectures and applies to various tasks. Compared to current pre-trained LoRA fusion methods, ICM-Fusion significantly reduces multi-task loss and can even achieve task enhancement in few-shot scenarios.
Problem

Research questions and friction points this paper is trying to address.

Addresses inter-weight conflicts in multi-task LoRA fusion
Prevents catastrophic domain forgetting in fused LoRA models
Enhances few-shot generalization for long-tailed weight distributions
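The inter-weight conflict the first bullet refers to arises when two task vectors point in opposing directions, so naively summing them cancels useful updates. A simple way to see the mechanics is PCGrad-style projection, which removes the conflicting component; this is an analogy chosen for illustration, not the paper's learned manifold projection, and the function name and vectors below are hypothetical.

```python
import numpy as np

def project_out_conflict(v, u):
    """If task vectors v and u conflict (negative dot product), remove
    from v its component along u so the fused direction no longer
    cancels u's contribution. Illustrative PCGrad-style surgery only."""
    dot = v @ u
    if dot < 0:
        v = v - (dot / (u @ u)) * u
    return v

v = np.array([1.0, -1.0])  # conflicts with u along the second axis
u = np.array([0.0, 1.0])
v_aligned = project_out_conflict(v, u)
print(v_aligned)  # [1. 0.]
```

After projection, v no longer has a negative component along u, so adding the two vectors preserves both tasks' directions instead of interfering.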
Innovation

Methods, ideas, or system contributions that make the work stand out.

Meta-learning with in-context adaptation
Dynamic task vector arithmetic balancing
Fusion VAE for multi-task LoRA generation
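The third bullet's F-VAE reconstructs fused LoRA weights from a latent code, which is what lets the orientation adjustment happen in latent space. The paper does not publish the F-VAE architecture here, so the sketch below is only the generic VAE forward pass (linear encoder heads, reparameterization, linear decoder) over a flattened weight vector; all dimensions and matrices are made-up stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)

def encode(w, W_mu, W_logvar):
    # Linear encoder heads producing latent mean and log-variance (illustrative)
    return W_mu @ w, W_logvar @ w

def reparameterize(mu, logvar):
    # Standard VAE reparameterization: z = mu + sigma * eps
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z, W_dec):
    # Linear decoder mapping the latent code back to weight space
    return W_dec @ z

dim, latent = 16, 4  # hypothetical flattened-weight and latent sizes
W_mu = rng.normal(size=(latent, dim)) * 0.1
W_logvar = rng.normal(size=(latent, dim)) * 0.1
W_dec = rng.normal(size=(dim, latent)) * 0.1

w_fused = rng.normal(size=dim)  # stand-in for flattened fused LoRA weights
mu, logvar = encode(w_fused, W_mu, W_logvar)
w_recon = decode(reparameterize(mu, logvar), W_dec)
print(w_recon.shape)  # (16,)
```

Regularizing the latent code (e.g. with the usual KL term during training) is what would impose the geometric consistency among task vectors that the summary describes.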