🤖 AI Summary
This work investigates how low-rank adaptation (LoRA) parameters influence catastrophic forgetting during fine-tuning. We systematically merge LoRA adapter weights back into the backbone to quantitatively analyze forgetting dynamics across pretraining and downstream tasks, as well as changes in model plasticity. We identify, for the first time, a "contextual forgetting" phenomenon in Vision Transformers (ViTs), characterized by task-dependent degradation of local features, distinct from the global forgetting observed in ResNets and unreported in prior continual learning literature. Moreover, we reveal that LoRA rank exerts a dual regulatory effect: excessively low ranks exacerbate forgetting of pretrained knowledge, while excessively high ranks impair downstream adaptability. Experiments span diverse multi-task continual learning scenarios. Our findings provide empirical foundations and principled guidelines for rank selection in efficient, sustainable visual model adaptation.
📝 Abstract
Broad, open-source availability of large pretrained foundation models on the internet through platforms such as HuggingFace has taken the world of practical deep learning by storm. A typical neural network training pipeline now consists of fine-tuning one of these pretrained networks on a small target dataset instead of training from scratch. For large models, this can be done even on modest hardware using a low-rank training technique known as Low-Rank Adaptation (LoRA). While low-rank training has already been studied in the continual learning setting, existing works usually store the learned adapter alongside the existing model and rarely modify the weights of the pretrained model by merging the LoRA into them after finishing the training of each task. In this article we investigate this setting and study the impact of LoRA rank on forgetting of the pretraining foundation task and on the plasticity and forgetting of subsequent ones. We observe that this rank has an important impact on forgetting of both the pretraining and downstream tasks. We also observe that vision transformers fine-tuned in this way exhibit a sort of "contextual" forgetting, a behaviour that we do not observe for residual networks and that we believe has not been reported in previous continual learning works.
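The merge-after-each-task setting the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's code: it assumes the standard LoRA formulation, where a frozen weight `W` is updated as `W + (alpha / r) * B @ A`, and all tensor values and shapes below are illustrative placeholders.

```python
import torch

def merge_lora(W: torch.Tensor, A: torch.Tensor, B: torch.Tensor,
               alpha: float, r: int) -> torch.Tensor:
    """Fold a LoRA adapter into the backbone: W' = W + (alpha / r) * B @ A."""
    return W + (alpha / r) * (B @ A)

d_out, d_in, r = 8, 16, 4
W = torch.randn(d_out, d_in)      # pretrained backbone weight (frozen)
A = torch.randn(r, d_in) * 0.01   # LoRA down-projection, rank r
B = torch.zeros(d_out, r)         # LoRA up-projection (zero-initialized)

# After a task is trained, the adapter is merged and then discarded;
# the next task starts from the modified backbone W_merged.
W_merged = merge_lora(W, A, B, alpha=8.0, r=r)

# With B still zero-initialized, merging is a no-op, as expected:
assert torch.allclose(W_merged, W)
```

Repeating this merge after every task is what distinguishes the studied setting from the more common practice of keeping one adapter per task: the backbone itself drifts, which is why forgetting of the pretraining task becomes measurable.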