Generative Parameter-Efficient Fine-Tuning

📅 2023-12-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
How can fine-tuning of large Transformers achieve both parameter and representation efficiency without compromising computational cost, memory footprint, or model performance? This paper proposes GIFT, a generative parameter-efficient fine-tuning method that generates task-specific layer parameters directly from the pretrained weights via a lightweight, bias-free bilinear network. Its core innovations are: (i) the first generative adaptation paradigm unifying parameter and representation tuning; and (ii) cross-layer weight sharing, which lets a single compact network replace per-layer adapters. On Commonsense170k, GIFT achieves a 5.7% absolute accuracy gain over LoRA with 14x fewer parameters; in instruction tuning, it attains a 5.4% higher win rate than LoRA while using only 25% of its parameters, and even slightly outperforms GPT-3.5 Turbo. Extensive experiments across diverse NLP and CV tasks confirm GIFT's strong generalization and broad applicability.
📝 Abstract
We present Generative Parameter-Efficient Fine-Tuning (GIFT) for adapting pretrained Transformer backbones on downstream tasks. GIFT learns to generate the fine-tuned weights for a layer directly from its pretrained weights. The GIFT network is parameterized in a minimally-simple way by two linear layers (without bias terms), and is shared by different pretrained layers selected for fine-tuning (e.g., the Query layers), which results in significantly fewer trainable parameters compared to layer-specific methods like Low-Rank Adapter (LoRA). We also show this formulation bridges parameter-efficient fine-tuning and representation fine-tuning. We perform comprehensive experiments on natural language tasks (commonsense and arithmetic reasoning, instruction tuning, and sequence classification) and computer vision tasks (fine-grained classification). We obtain the best performance and parameter efficiency among baselines on commonsense and arithmetic reasoning, and instruction following using the Llama family of models and on visual recognition benchmarks using Vision Transformers. Notably, compared to LoRA, we obtain a 5.7% absolute increase in average accuracy with a 14-fold reduction in parameters on Commonsense170k using Llama-3 (8B), and a 5.4% absolute increase in win rate with a 4-fold reduction in parameters using Llama-2 (7B) during instruction tuning. Our GIFT also obtains a slightly higher win rate on instruction tuning than GPT 3.5 (Turbo 1106).
Problem

Research questions and friction points this paper is trying to address.

How to unify parameter-efficient and representation-efficient fine-tuning of large models?
How to generate fine-tuned weights directly from the pretrained weights, rather than learning per-layer adapters?
How to balance parameter, compute, and memory efficiency without sacrificing task performance?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates each layer's fine-tuned weights directly from its frozen pretrained weights
Parameterizes the generator as two bias-free linear layers (a simple low-rank bilinear form) shared across all tuned layers
Bridges parameter-efficient and representation fine-tuning while balancing parameter, compute, and memory efficiency
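The idea above can be sketched numerically. This is a minimal, illustrative reading of the abstract (names like `phi_1`, `phi_2`, and `gift_weight` are assumptions, not the paper's code): a single pair of bias-free linear maps is shared across all fine-tuned layers and generates each layer's update directly from its pretrained weight, roughly `W' = W + W @ phi_1 @ phi_2`. Because the pair is shared, the trainable parameter count does not grow with the number of layers, unlike per-layer LoRA adapters.

```python
import numpy as np

rng = np.random.default_rng(0)
d, rank, n_layers = 64, 4, 8          # hidden size, GIFT rank, layers tuned

# Shared trainable parameters (the only ones learned): 2 * d * rank values.
phi_1 = rng.normal(scale=0.02, size=(d, rank))
phi_2 = np.zeros((rank, d))           # zero init => W' starts equal to W

def gift_weight(W):
    """Generate a fine-tuned weight from a frozen pretrained weight W."""
    return W + W @ phi_1 @ phi_2

pretrained = [rng.normal(size=(d, d)) for _ in range(n_layers)]
finetuned = [gift_weight(W) for W in pretrained]

# Parameter counts: GIFT's generator is shared, LoRA's (A, B) are per-layer.
gift_params = phi_1.size + phi_2.size        # 2 * d * rank = 512
lora_params = n_layers * 2 * d * rank        # 8 * 512 = 4096
print(gift_params, lora_params)              # prints: 512 4096
```

With this toy configuration the shared generator is 8x smaller than per-layer LoRA at the same rank, which mirrors (but does not reproduce) the parameter reductions reported in the summary.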