Preference-Aligned LoRA Merging: Preserving Subspace Coverage and Addressing Directional Anisotropy

📅 2026-03-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the degradation in fidelity and generalization commonly observed in multi-task LoRA merging, which stems from insufficient subspace coverage and from directional anisotropy that suppresses critical task-specific directions. To mitigate these issues, the authors propose TARA-Merging (Task-Rank Anisotropy Alignment), an approach that jointly models subspace coverage and directional anisotropy. By introducing a preference-weighted cross-entropy pseudo-loss, TARA-Merging reweights merging coefficients at the directional level, preserving task-relevant subspaces while balancing influence across directions. The method consistently outperforms existing baselines across eight vision and six natural language inference benchmarks, improving the robustness, faithfulness, and cross-task consistency of the merged model.
📝 Abstract
Merging multiple Low-Rank Adaptation (LoRA) modules is promising for constructing general-purpose systems, yet challenging because LoRA update directions span different subspaces and contribute unevenly. When merged naively, such mismatches can weaken the directions most critical to certain task losses while overemphasizing relatively less important ones, ultimately reducing the model's ability to represent all tasks faithfully. We revisit this problem through two perspectives: subspace coverage, which captures how broadly LoRA directions cover diverse representational directions, and anisotropy, which reflects the imbalance of influence across those directions. We propose TARA-Merging (Task-Rank Anisotropy Alignment), which aligns merging weights using a preference-weighted cross-entropy pseudo-loss while preserving task-relevant LoRA subspaces. This ensures broad subspace coverage and mitigates anisotropy via direction-wise reweighting. Across eight vision and six NLI benchmarks, TARA-Merging consistently outperforms vanilla and LoRA-aware baselines, demonstrating strong robustness and generalization, and highlighting the importance of addressing both subspace coverage and anisotropy in LoRA merging.
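The abstract describes merging at the level of individual update directions rather than whole LoRA modules: each task's update is decomposed into directions, and merging coefficients are reweighted per direction so that dominant directions do not drown out weaker but task-relevant ones. The sketch below illustrates that general idea with a simple SVD-based decomposition and a singular-value damping step; it is a hedged illustration under assumed details, not the paper's actual TARA-Merging algorithm (the true preference-weighted pseudo-loss and coefficient rule are not specified here, so scalar `prefs` stands in for them).

```python
import numpy as np

def merge_lora_directionwise(deltas, prefs):
    """Illustrative direction-wise reweighted merge of LoRA updates.

    deltas: list of per-task updates Delta_W_t = B_t @ A_t, each (d_out, d_in)
    prefs:  list of scalar task preferences (hypothetical stand-in for the
            paper's preference-weighted pseudo-loss); higher values keep
            that task's directions stronger in the merge.

    NOT the paper's exact method: each update is decomposed by SVD and its
    singular spectrum is damped toward its mean, so that anisotropy (a few
    dominant directions) does not suppress weaker task-relevant directions.
    """
    prefs = np.asarray(prefs, dtype=float)
    prefs = prefs / prefs.sum()                # normalize task preferences
    merged = np.zeros_like(deltas[0])
    for dW, p in zip(deltas, prefs):
        U, s, Vt = np.linalg.svd(dW, full_matrices=False)
        # Damp the spread of singular values (geometric pull toward the
        # mean) to reduce directional anisotropy before summing.
        s_balanced = np.sqrt(s * s.mean())
        merged += p * (U * s_balanced) @ Vt
    return merged

# Toy usage: two rank-2 task updates on a 4x4 weight matrix.
rng = np.random.default_rng(0)
d1 = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 4))
d2 = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 4))
merged = merge_lora_directionwise([d1, d2], prefs=[0.7, 0.3])
print(merged.shape)
```

A naive merge would instead compute `0.5 * (d1 + d2)`, which lets whichever task has the larger singular values dominate the shared directions; the damping step above is one simple way to flatten that imbalance.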
Problem

Research questions and friction points this paper is trying to address.

- LoRA merging
- subspace coverage
- directional anisotropy
- low-rank adaptation
- task representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

- LoRA merging
- subspace coverage
- directional anisotropy
- TARA-Merging
- preference-aligned adaptation