🤖 AI Summary
Existing LoRA initialization methods—such as gradient-based or SVD-based approaches—lack theoretical guarantees for maximizing the expected gradient signal, limiting fine-tuning efficiency. To address this, we propose EVA (Explained Variance Adaptation), the first method to theoretically establish that initializing LoRA parameters along the dominant directions of activation variance maximizes the expected gradient signal. EVA employs incremental SVD to extract high-variance activation directions, yielding an optimal low-rank initialization for LoRA weights, and further supports dynamic, adaptive rank allocation. Extensive experiments across diverse tasks—including language generation and understanding, image classification, and reinforcement learning—demonstrate that EVA significantly accelerates convergence (achieving state-of-the-art average performance) while reducing trainable parameters by 30%–50%, thereby jointly improving both efficiency and effectiveness.
📝 Abstract
Foundation models (FMs) are pre-trained on large-scale datasets and then fine-tuned for a specific downstream task. The most common fine-tuning method is to update pre-trained weights via low-rank adaptation (LoRA). Existing initialization strategies for LoRA often rely on singular value decomposition (SVD) of gradients or weight matrices. However, they do not provably maximize the expected gradient signal, which is critical for fast adaptation. To this end, we introduce Explained Variance Adaptation (EVA), an initialization scheme that uses the directions capturing the most activation variance, provably maximizing the expected gradient signal and accelerating fine-tuning. EVA performs incremental SVD on minibatches of activation vectors and selects the right-singular vectors for initialization once they have converged. Further, by selecting the directions that capture the most activation variance for a given rank budget, EVA accommodates adaptive ranks that reduce the number of trainable parameters while maintaining or improving downstream performance. We apply EVA to a variety of fine-tuning tasks, such as language generation and understanding, image classification, and reinforcement learning. EVA exhibits faster convergence than competitors and achieves the highest average score across a multitude of tasks per domain while reducing the number of trainable parameters through rank redistribution.
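The core idea in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: EVA uses incremental SVD over minibatches until the right-singular vectors converge, whereas here a single full SVD of one activation batch stands in for that procedure; the function name `eva_init` and its signature are hypothetical. The rows of `A` are the top directions of activation variance, `B` is zero so the adapter starts as the identity (standard LoRA), and the explained-variance ratio of the kept components is the kind of quantity an adaptive rank budget could use.

```python
import numpy as np

def eva_init(activations, out_features, rank):
    """Hypothetical sketch of a variance-based LoRA initialization.

    activations: (num_samples, in_features) batch of layer inputs.
    Returns LoRA factors A (rank, in_features), B (out_features, rank),
    and the fraction of activation variance captured by the top `rank`
    right-singular directions.
    """
    # Right-singular vectors of the activation matrix are the directions
    # of decreasing explained variance in activation space.
    _, s, vt = np.linalg.svd(activations, full_matrices=False)
    A = vt[:rank]                       # top-variance directions
    B = np.zeros((out_features, rank))  # zero so B @ A = 0 at init
    explained = (s[:rank] ** 2).sum() / (s ** 2).sum()
    return A, B, explained

# Toy usage with random "activations" (illustrative sizes only).
rng = np.random.default_rng(0)
acts = rng.normal(size=(256, 64))
A, B, ev = eva_init(acts, out_features=128, rank=8)
```

Because `B` starts at zero, the fine-tuned layer initially computes the same function as the pre-trained one; the variance-aligned `A` only shapes where gradient signal flows once training begins.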