Fine-Tuning Attention Modules Only: Enhancing Weight Disentanglement in Task Arithmetic

📅 2024-07-09
📈 Citations: 2
Influential: 0
🤖 AI Summary
In task arithmetic, coupling between the weights of different tasks induces interference, degrading both efficiency and generalization. Method: We propose fine-tuning only the attention modules of Transformers, revealing for the first time their intrinsic kernel-like behavior. Through systematic analysis, we find that representation modules facilitate weight disentanglement, whereas task-specific heads impede it, establishing a modular design principle for disentanglement. Contribution/Results: Our method improves weight disentanglement and zero-shot task generalization without additional training. It significantly outperforms baselines across multiple benchmarks while avoiding the doubled training cost of Neural Tangent Kernel (NTK) linearization, and it achieves superior weight disentanglement and single-task performance, offering a more efficient and effective alternative to existing linearized or fully fine-tuned approaches.
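The attention-only fine-tuning described above amounts to freezing every parameter outside the attention modules before training. A minimal, framework-agnostic sketch is below; the parameter-name convention (substring `attn`, a `head` classifier) is an illustrative assumption, not the authors' actual code:

```python
def trainable_mask(param_names, train_only_attention=True):
    """Return the subset of parameter names to update during fine-tuning.

    Parameters whose names contain 'attn' are treated as attention-module
    weights; everything else (MLP blocks, the classification head) stays
    frozen. The naming scheme is a common Transformer convention, assumed
    here for illustration.
    """
    if not train_only_attention:
        return set(param_names)
    return {n for n in param_names if "attn" in n}

# Hypothetical parameter names loosely following a ViT-style layout.
names = [
    "blocks.0.attn.qkv.weight",
    "blocks.0.attn.proj.weight",
    "blocks.0.mlp.fc1.weight",
    "head.weight",
]
print(sorted(trainable_mask(names)))
```

In a real training loop, the same selection would typically be applied by toggling each parameter's gradient flag (e.g. `requires_grad` in PyTorch) based on this mask.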

📝 Abstract
In recent years, task arithmetic has garnered increasing attention. This approach edits pre-trained models directly in weight space by combining the fine-tuned weights of various tasks into a unified model. Its efficiency and cost-effectiveness stem from its training-free combination, contrasting with traditional methods that require model training on large datasets for multiple tasks. However, applying such a unified model to individual tasks can lead to interference from other tasks (lack of weight disentanglement). To address this issue, Neural Tangent Kernel (NTK) linearization has been employed to leverage a "kernel behavior", facilitating weight disentanglement and mitigating adverse effects from unrelated tasks. Despite its benefits, NTK linearization presents drawbacks, including doubled training costs as well as reduced performance of individual models. To tackle this problem, we propose a simple yet effective and efficient method: fine-tune only the attention modules of the Transformer. Our study reveals that the attention modules exhibit kernel behavior, and fine-tuning only the attention modules significantly improves weight disentanglement. To further understand how our method improves the weight disentanglement of task arithmetic, we present a comprehensive study of task arithmetic that differentiates the roles of the representation module and the task-specific module. In particular, we find that the representation module plays an important role in improving weight disentanglement, whereas task-specific modules such as classification heads can degrade weight disentanglement performance. (The code is available at https://github.com/kyrie-23/task_arithmetic_tangent)
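The weight-space combination the abstract describes can be sketched concretely: each task contributes a task vector (fine-tuned weights minus pre-trained weights), and the unified model adds a scaled sum of these vectors back onto the pre-trained weights. The dict-of-floats representation and the scaling coefficient name `alpha` are simplifying assumptions for illustration:

```python
def task_arithmetic(pretrained, finetuned_per_task, alpha=0.3):
    """Combine task vectors in weight space (training-free).

    Each task vector is (finetuned - pretrained); the unified model is the
    pretrained weights plus alpha times the sum of task vectors. Parameters
    are plain floats keyed by name here, standing in for weight tensors.
    """
    merged = dict(pretrained)
    for finetuned in finetuned_per_task:
        for name, weight in finetuned.items():
            merged[name] += alpha * (weight - pretrained[name])
    return merged

# Toy one-parameter example with two tasks.
pre = {"w": 1.0}
task_a = {"w": 2.0}  # task vector: +1.0
task_b = {"w": 3.0}  # task vector: +2.0
print(task_arithmetic(pre, [task_a, task_b], alpha=0.5))
```

The lack-of-disentanglement problem arises exactly here: when the task vectors overlap in the same parameters, adding one task's vector perturbs the behavior learned for the others.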
Problem

Research questions and friction points this paper is trying to address.

Multi-task Learning
Weighting Strategies
Model Interference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformer Attention Module Adjustment
Task Differentiation Enhancement
Representation Module Significance