🤖 AI Summary
This work addresses the lack of theoretical understanding behind the effectiveness of parameter-efficient fine-tuning (PEFT) methods in large language models. By adopting a linearized perspective on fine-tuning dynamics, the study establishes the first theoretical connection between PEFT and the Neural Tangent Kernel (NTK), introducing an inductive bias based on Euclidean distance in parameter space that renders the fine-tuning process equivalent to learning under a positive-definite NTK. Through spectral analysis combined with LoRA experiments, the authors reveal a strong correlation between the NTK eigen-spectrum and model adaptation performance, and derive perturbation bounds on the NTK spectrum based on the choice of fine-tuned layers. The theoretical predictions align closely with empirical LoRA performance, offering a principled and interpretable foundation for designing efficient PEFT methods.
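To make the spectral claim concrete, here is a minimal numerical sketch of the idea: linearizing a tiny two-layer network around its "pretrained" weights, forming the empirical NTK Gram matrix from the Jacobian of the *trainable* parameters only, and observing how restricting fine-tuning to a subset of layers perturbs the eigen-spectrum. The network, sizes, and parameter split are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "pretrained" model f(x) = W2 @ tanh(W1 @ x) with scalar output.
d_in, d_h = 4, 8
W1 = rng.normal(size=(d_h, d_in)) / np.sqrt(d_in)
W2 = rng.normal(size=(1, d_h)) / np.sqrt(d_h)
X = rng.normal(size=(16, d_in))  # a small batch of inputs

def jacobian(x, train_W1=True, train_W2=True):
    """Gradient of the scalar output w.r.t. the selected (trainable) parameters."""
    h = np.tanh(W1 @ x)
    grads = []
    if train_W1:
        # d f / d W1[i, j] = W2[0, i] * (1 - h[i]^2) * x[j]
        grads.append(np.outer(W2.ravel() * (1 - h**2), x).ravel())
    if train_W2:
        grads.append(h)  # d f / d W2 = tanh(W1 @ x)
    return np.concatenate(grads)

def ntk_spectrum(**which):
    J = np.stack([jacobian(x, **which) for x in X])
    K = J @ J.T  # empirical NTK Gram matrix; PSD by construction
    return np.linalg.eigvalsh(K)  # ascending eigenvalues

full = ntk_spectrum(train_W1=True, train_W2=True)
last_only = ntk_spectrum(train_W1=False, train_W2=True)

# Freezing a layer removes a PSD term from the kernel, so by Weyl's
# inequality every eigenvalue can only shrink (or stay equal).
assert np.all(full >= -1e-10)
assert np.all(full + 1e-10 >= last_only)
```

The comparison at the end mirrors the paper's perturbation-bound viewpoint: the choice of fine-tuned layers directly shifts the NTK spectrum, which in turn correlates with adaptation performance.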
📝 Abstract
Parameter-Efficient Fine-Tuning (PEFT) is a popular class of techniques that strive to adapt large models in a scalable and resource-efficient manner. Yet, the mechanisms underlying their training performance and generalization remain underexplored. In this paper, we provide several insights into such fine-tuning through the lens of linearization. Fine-tuned models are often implicitly encouraged to remain close to the pretrained model. By making this explicit, using a Euclidean distance inductive bias in parameter space, we show that fine-tuning dynamics become equivalent to learning with the positive-definite neural tangent kernel (NTK). We specifically analyze how close the full nonlinear and the linearized fine-tuning optimizations are, as a function of the strength of the regularization. This lets us assess, in practice, how faithful a linearization is when fine-tuning large language models (LLMs). When linearization is a good model, our findings reveal a strong correlation between the eigenvalue spectrum of the NTK and the performance of model adaptation. Motivated by this, we give spectral perturbation bounds on the NTK induced by the choice of layers selected for fine-tuning. We empirically validate our theory on Low Rank Adaptation (LoRA) on LLMs. These insights not only characterize fine-tuning but also have the potential to enhance PEFT techniques, paving the way to better-informed and more nimble adaptation of LLMs.
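For readers unfamiliar with the PEFT method validated here, the following is a minimal sketch of the standard LoRA reparameterization: the pretrained weight W stays frozen and only a low-rank correction B @ A, scaled by alpha / r, is trained. The dimensions and the names W, A, B, alpha, r follow the common LoRA convention; nothing about the paper's experimental configuration is reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r, alpha = 32, 32, 4, 8
W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection, small init
B = np.zeros((d_out, r))                # trainable up-projection, zero init

def lora_forward(x):
    # Adapted layer: W x + (alpha / r) * B A x
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# Zero-initializing B makes the adapter a no-op at the start of fine-tuning,
# i.e. the model begins exactly at the pretrained function.
assert np.allclose(lora_forward(x), W @ x)

# Trainable parameters: r * (d_in + d_out) = 256, vs. d_in * d_out = 1024
# for full fine-tuning of this layer.
trainable = A.size + B.size
assert trainable < W.size
```

Because the trained update B @ A is small and low-rank, the adapted model stays close to the pretrained one in parameter space, which is exactly the regime where the linearized (NTK) description of fine-tuning is expected to be accurate.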