Leveraging Submodule Linearity Enhances Task Arithmetic Performance in LLMs

📅 2025-04-15
📈 Citations: 2
Influential: 0
🤖 AI Summary
Task Arithmetic (TA) exhibits limited performance in multi-task fusion for large language models (LLMs), primarily due to insufficient linearity assumptions at the full-model level. Method: We observe that individual model submodules—particularly attention and MLP layers—exhibit significantly higher intrinsic linearity than the global model, and we leverage this as a novel structural prior. Accordingly, we propose Submodule-Level Linear Weighted Merging (SLWM): a closed-form, fine-tuning-free merging method that derives optimal weights per submodule via statistical linearity analysis. Contribution/Results: SLWM extends the TA framework and achieves substantial gains across diverse LLM scales (7B–70B) and multi-task settings (e.g., instruction following + code generation + mathematical reasoning). It consistently outperforms standard TA and other baselines, markedly improving both multi-task generalization and merging stability without additional training or hyperparameter tuning.
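To make the contrast concrete, here is a minimal sketch of standard task arithmetic next to a submodule-level variant where each submodule carries its own merging coefficients. This is an illustration of the general idea only; the function names and the `weights_per_submodule` interface are assumptions, not the paper's implementation, and the paper derives its coefficients in closed form rather than taking them as inputs.

```python
import numpy as np

def task_arithmetic(base, finetuned, lam):
    """Standard task arithmetic: add globally scaled task vectors to the base.

    base: dict mapping parameter name -> np.ndarray (pretrained weights)
    finetuned: list of dicts with the same keys (task-specific models)
    lam: one global scaling coefficient shared by every parameter
    """
    merged = {}
    for name, w0 in base.items():
        task_vectors = [ft[name] - w0 for ft in finetuned]
        merged[name] = w0 + lam * sum(task_vectors)
    return merged

def submodule_merge(base, finetuned, weights_per_submodule):
    """Submodule-level merging: each submodule (e.g. an attention or MLP
    block) gets its own per-task coefficients instead of one global lambda.

    weights_per_submodule: dict mapping submodule/parameter name -> list of
    per-task coefficients (hypothetical interface; the paper obtains these
    from a statistical linearity analysis of each submodule).
    """
    merged = {}
    for name, w0 in base.items():
        ws = weights_per_submodule[name]
        merged[name] = w0 + sum(w * (ft[name] - w0)
                                for w, ft in zip(ws, finetuned))
    return merged
```

The only structural change is where the coefficient lives: per submodule rather than per model, which is what lets the higher intrinsic linearity of attention and MLP blocks be exploited independently.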

📝 Abstract
Task arithmetic is a straightforward yet highly effective strategy for model merging, enabling the resulting model to exhibit multi-task capabilities. Recent research indicates that models exhibiting linearity improve the performance of task arithmetic. In contrast to existing methods that rely on global linearization of the model, we argue that this linearity already exists within the model's submodules. In particular, we present a statistical analysis showing that submodules (e.g., layers, self-attentions, and MLPs) exhibit significantly higher linearity than the overall model. Based on these findings, we propose an innovative model merging strategy that merges these submodules independently. Specifically, we derive a closed-form solution for optimal merging weights grounded in the linear properties of these submodules. Experimental results demonstrate that our method consistently outperforms the standard task arithmetic approach and other established baselines across different model scales and various tasks. This result highlights the benefits of leveraging the linearity of submodules and provides a new perspective for exploring effective and practical multi-task model merging.
Problem

Research questions and friction points this paper is trying to address.

Enhancing task arithmetic performance via submodule linearity in LLMs
Proposing a model merging strategy using submodules' linear properties
Improving multi-task capabilities through independent submodule merging
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages submodule linearity for merging
Independent merging of model submodules
Closed-form solution for optimal weights
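The closed-form weights can be illustrated with a simple sketch: if a submodule is approximately linear, the output change of the merged submodule is roughly a weighted sum of the per-task output changes, and the best weights under a squared-error objective fall out of the normal equations. This is a hypothetical formulation for illustration; the paper's exact objective and derivation may differ, and `closed_form_weights` is an assumed name.

```python
import numpy as np

def closed_form_weights(delta_outputs, target):
    """Least-squares closed form for per-submodule merging coefficients.

    Under approximate linearity, the merged submodule's output change is
    ~ sum_i w_i * delta_i, where delta_i is the output change induced by
    fine-tuning on task i alone. The w minimizing
    ||sum_i w_i * delta_i - target||^2 solves (D^T D) w = D^T target.

    delta_outputs: (n_tasks, d) array of per-task output deltas
    target: (d,) desired combined output change
    """
    D = np.asarray(delta_outputs).T              # (d, n_tasks) design matrix
    w, *_ = np.linalg.lstsq(D, target, rcond=None)
    return w
```

Because the solution is closed-form and computed once per submodule, no fine-tuning or hyperparameter search is needed, which matches the fine-tuning-free claim in the summary above.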
👥 Authors
Rui Dai — National Engineering Laboratory for Brain-Inspired Intelligence Technology and Application, University of Science and Technology of China
Sile Hu — Independent Researcher
Xu Shen — Independent Researcher
Yonggang Zhang — Hong Kong Baptist University
Xinmei Tian — University of Science and Technology of China
Jieping Ye — Independent Researcher