🤖 AI Summary
To address catastrophic forgetting in continual learning of vision-language models (VLMs), this paper proposes a dynamic-rank LoRA method that requires no reference data and leaves the inference architecture and deployment pipeline unchanged. The method incrementally integrates new knowledge via low-rank updates while actively enhancing, rather than degrading, the pretrained semantic capabilities. Its core contributions are: (1) the first module-level importance estimation mechanism that adaptively allocates LoRA ranks to jointly optimize plasticity and stability; and (2) reference-free, unsupervised awareness of task boundaries. Evaluated on multiple VLM continual learning benchmarks, it achieves state-of-the-art performance: average zero-shot transfer accuracy improves by 2.3%, task-sequence accuracy decay decreases by 67%, and inference overhead remains zero.
📝 Abstract
We investigate whether the pre-trained knowledge in vision-language models (VLMs), such as CLIP, can be retained -- or even enhanced -- in continual learning (CL) while incorporating new knowledge from the data stream. Existing CL methods primarily focus on continual downstream adaptation using components isolated from the pre-trained model (PTM), increasing inference complexity and limiting improvements to the PTM itself; some also retain knowledge by relying on additional reference data, leading to high training costs. To address these limitations, we propose a universal and efficient continual learning approach for VLMs based on Dynamic Rank-Selective LoRA (CoDyRA), which directly improves the PTM while preserving the existing knowledge from both pre-training and CL. Through analyses of how LoRA rank and placement affect learning and forgetting in CL, we design CoDyRA to adaptively perform rank-minimized parameter updates in different modules, based on their importance to the current data. This ensures a balance between knowledge acquisition (plasticity) and forgetting mitigation (stability). Our method operates without explicit domain or distribution prediction and does not rely on reference data, enabling seamless task integration while maintaining pre-trained capabilities. Moreover, CoDyRA preserves the original model architecture and deployment pipeline, introducing no additional inference overhead. Extensive experiments demonstrate that our approach enhances representations based on new downstream data while retaining pre-trained knowledge, achieving state-of-the-art results.
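The core idea of rank-minimized, importance-driven updates can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a module's importance can be read off the singular-value spectrum of its candidate LoRA update (the abstract does not specify the actual importance criterion), and the threshold `tau`, the module names, and the shapes are all hypothetical.

```python
import numpy as np

def select_rank(delta_w: np.ndarray, max_rank: int, tau: float) -> int:
    """Pick the smallest rank that keeps every singular value above a
    fraction `tau` of the largest one (hypothetical importance criterion)."""
    s = np.linalg.svd(delta_w, compute_uv=False)
    keep = int(np.sum(s >= tau * s[0]))
    return min(max(keep, 1), max_rank)

def truncate_update(delta_w: np.ndarray, rank: int) -> np.ndarray:
    """Best rank-`rank` approximation of the update, via truncated SVD."""
    u, s, vt = np.linalg.svd(delta_w, full_matrices=False)
    return (u[:, :rank] * s[:rank]) @ vt[:rank]

rng = np.random.default_rng(0)
# Simulate candidate LoRA updates (B @ A, max rank 8) for a few modules;
# the module names are placeholders, not the paper's placement choices.
modules = {
    name: rng.standard_normal((64, 8)) @ rng.standard_normal((8, 64))
    for name in ["attn.q_proj", "attn.v_proj", "mlp.fc1"]
}
for name, dw in modules.items():
    r = select_rank(dw, max_rank=8, tau=0.3)
    rel_err = np.linalg.norm(dw - truncate_update(dw, r)) / np.linalg.norm(dw)
    print(f"{name}: allocated rank {r}, relative error {rel_err:.3f}")
```

Because the truncated update has exactly the selected rank, it can be merged back into the pre-trained weight matrix after training, which is consistent with the abstract's claim of an unchanged architecture and zero inference overhead.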