Parameter Efficient Fine-tuning via Explained Variance Adaptation

📅 2024-10-09
📈 Citations: 4
Influential: 0
🤖 AI Summary
Existing LoRA initialization methods, such as gradient-based or SVD-based approaches, do not provably maximize the expected gradient signal, which limits fine-tuning efficiency. To address this, we propose EVA (Explained Variance Adaptation), which establishes that initializing LoRA parameters along the dominant directions of activation variance maximizes the expected gradient signal. EVA employs incremental SVD on minibatches of activations to extract high-variance directions, uses them to initialize the LoRA weights, and further supports adaptive rank allocation under a fixed rank budget. Extensive experiments across diverse tasks, including language generation and understanding, image classification, and reinforcement learning, demonstrate that EVA converges faster than competing initialization schemes and achieves the highest average performance while reducing trainable parameters by 30%–50%, thereby jointly improving efficiency and effectiveness.

📝 Abstract
Foundation models (FMs) are pre-trained on large-scale datasets and then fine-tuned for a specific downstream task. The most common fine-tuning method is to update pre-trained weights via low-rank adaptation (LoRA). Existing initialization strategies for LoRA often rely on singular value decompositions (SVD) of gradients or weight matrices. However, they do not provably maximize the expected gradient signal, which is critical for fast adaptation. To this end, we introduce Explained Variance Adaptation (EVA), an initialization scheme that uses the directions capturing the most activation variance, provably maximizing the expected gradient signal and accelerating fine-tuning. EVA performs incremental SVD on minibatches of activation vectors and selects the right-singular vectors for initialization once they have converged. Further, by selecting the directions that capture the most activation variance for a given rank budget, EVA accommodates adaptive ranks that reduce the number of trainable parameters while maintaining or improving downstream performance. We apply EVA to a variety of fine-tuning tasks, such as language generation and understanding, image classification, and reinforcement learning. EVA exhibits faster convergence than competitors and achieves the highest average score across a multitude of tasks per domain, while reducing the number of trainable parameters through rank redistribution.
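The abstract's core idea, initializing LoRA's down-projection with the top right-singular vectors of the layer's input activations, can be sketched as follows. This is a simplified illustration, not the paper's implementation: the function name `eva_init_A` is hypothetical, and a single full SVD on stacked minibatches stands in for the incremental SVD with a convergence check that the paper describes.

```python
import numpy as np

def eva_init_A(activation_batches, rank):
    """Sketch of an EVA-style LoRA initialization (hypothetical helper).

    Stacks minibatches of a layer's input activations, runs an SVD on the
    centered data, and returns the top-`rank` right-singular vectors to
    initialize the LoRA down-projection A. (The paper uses an incremental
    SVD and stops once the singular vectors converge.)
    """
    X = np.concatenate(activation_batches, axis=0)   # (n_samples, d_in)
    # Right-singular vectors of the centered activations are the
    # directions of maximal activation variance.
    _, s, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    A = Vt[:rank]                                    # (rank, d_in)
    # Fraction of total activation variance captured by the chosen rank.
    explained = (s[:rank] ** 2).sum() / (s ** 2).sum()
    return A, explained
```

As in standard LoRA, the up-projection B would still start at zero, so the adapted layer initially matches the pre-trained one; only the initialization of A changes.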
Problem

Research questions and friction points this paper is trying to address.

Improves LoRA initialization for faster model adaptation
Maximizes gradient signal via activation variance directions
Reduces trainable parameters while maintaining performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses activation variance directions for initialization
Performs incremental SVD on minibatches
Adaptive ranks reduce trainable parameters
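The adaptive-rank idea above can be illustrated with a small sketch, under the assumption (not spelled out in this summary) that a global rank budget is spent greedily on the directions with the largest explained variance across layers; the helper name `redistribute_ranks` is hypothetical.

```python
def redistribute_ranks(explained_ratios, total_budget):
    """Sketch of variance-based rank redistribution (hypothetical helper).

    `explained_ratios[i]` lists, per singular direction of layer i, the
    fraction of that layer's activation variance it explains. The global
    rank budget is assigned greedily to the highest-variance directions
    across all layers, so layers with flatter spectra receive fewer ranks.
    """
    # Flatten into (variance_ratio, layer) pairs across all layers.
    candidates = [
        (ratio, layer)
        for layer, ratios in enumerate(explained_ratios)
        for ratio in ratios
    ]
    candidates.sort(reverse=True)  # highest-variance directions first
    ranks = [0] * len(explained_ratios)
    for _, layer in candidates[:total_budget]:
        ranks[layer] += 1
    return ranks
```

For example, with per-layer ratios `[[0.5, 0.3, 0.1], [0.6, 0.2, 0.05]]` and a budget of 3, the three largest ratios (0.6, 0.5, 0.3) yield ranks `[2, 1]`, spending more of the parameter budget where activation variance is concentrated.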