🤖 AI Summary
To address catastrophic forgetting in class-incremental learning (CIL), caused by parameter drift during training and adapter misalignment during inference in pretrained models, this paper proposes the Model Surgery (MOS) framework. MOS implants lightweight task-specific adapters onto a frozen pretrained backbone and tackles forgetting at both the training and inference stages. Its key contributions are: (1) an adapter merging training strategy that mitigates parameter drift by regularizing adapter updates with shared fusion weights; and (2) a training-free, self-refined adapter retrieval method that improves inference-time adapter selection through iterative similarity-based refinement. This dual-level design suppresses forgetting by jointly constraining parameter updates and improving adapter matching. MOS achieves state-of-the-art performance across seven standard CIL benchmarks. The implementation is publicly available.
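To make the parameter-level idea concrete, below is a minimal, hypothetical PyTorch sketch of "regularizing adapter updates with shared fusion weights": simple bottleneck adapters are merged by parameter averaging, and the new task's adapter is pulled toward that merged anchor while fitting the current task. The class and function names, the averaging merge, and the L2 pull-back term are illustrative assumptions, not the exact MOS procedure.

```python
import copy
import torch
import torch.nn as nn


class LinearAdapter(nn.Module):
    """A lightweight bottleneck adapter: down-project, ReLU, up-project, residual add."""

    def __init__(self, dim: int, bottleneck: int = 16):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(torch.relu(self.down(x)))


def merge_adapters(adapters: list) -> LinearAdapter:
    """Average the parameters of previously learned adapters (a simple stand-in
    for the shared fusion weights described in the summary)."""
    merged = copy.deepcopy(adapters[0])
    with torch.no_grad():
        for name, param in merged.named_parameters():
            stacked = torch.stack([dict(a.named_parameters())[name] for a in adapters])
            param.copy_(stacked.mean(dim=0))
    return merged


def train_task_adapter(features: torch.Tensor, targets: torch.Tensor,
                       old_adapters: list, head: nn.Linear,
                       reg_weight: float = 0.1, steps: int = 100) -> LinearAdapter:
    """Learn a new task-specific adapter on frozen-backbone features, regularizing
    its parameters toward the merged (fused) adapter to limit parameter drift."""
    dim = features.size(-1)
    new_adapter = copy.deepcopy(old_adapters[-1]) if old_adapters else LinearAdapter(dim)
    anchor = merge_adapters(old_adapters) if old_adapters else None
    optimizer = torch.optim.SGD(
        list(new_adapter.parameters()) + list(head.parameters()), lr=1e-2)
    for _ in range(steps):
        loss = nn.functional.cross_entropy(head(new_adapter(features)), targets)
        if anchor is not None:
            # Pull the new adapter toward the fused weights instead of letting it drift freely.
            for p_new, p_anchor in zip(new_adapter.parameters(), anchor.parameters()):
                loss = loss + reg_weight * (p_new - p_anchor).pow(2).sum()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return new_adapter
```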
📝 Abstract
Class-Incremental Learning (CIL) requires models to continually acquire knowledge of new classes without forgetting old ones. Although Pre-trained Models (PTMs) have shown excellent performance in CIL, catastrophic forgetting still occurs as the model learns new concepts. Existing work adjusts the PTM with lightweight components, yet forgetting still arises at the *parameter* and *retrieval* levels. Specifically, iterative updates of the model cause parameter drift, while mistakenly retrieving irrelevant modules leads to mismatches during inference. To this end, we propose MOdel Surgery (MOS) to rescue the model from forgetting previous knowledge. By training task-specific adapters, we continually adjust the PTM to downstream tasks. To mitigate parameter-level forgetting, we present an adapter merging approach for learning task-specific adapters, which bridges the gap between different components while preserving task-specific information. Besides, to address retrieval-level forgetting, we introduce a training-free self-refined adapter retrieval mechanism during inference, which leverages the model's inherent ability for better adapter retrieval. By jointly rectifying the model with these steps, MOS can robustly resist catastrophic forgetting during learning. Extensive experiments on seven benchmark datasets validate MOS's state-of-the-art performance. Code is available at: https://github.com/sun-hailong/AAAI25-MOS
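The retrieval-level fix can be pictured as a training-free, iterative selection loop: embed the test sample with some adapter, pick the task whose prototype is most similar, re-embed with that task's adapter, and repeat until the choice stabilizes. The sketch below assumes cosine similarity to per-task mean-feature prototypes and a small fixed number of refinement rounds; the helper names (`backbone`, `task_prototypes`) are hypothetical and the actual MOS mechanism may differ in its similarity measure and stopping rule.

```python
import torch


def self_refined_retrieval(x, backbone, adapters, task_prototypes, max_iters: int = 3):
    """Training-free adapter selection by iterative similarity-based refinement.

    `adapters[t]` is task t's adapter and `task_prototypes[t]` is assumed to be a
    mean feature vector of task t's classes. Returns the chosen adapter and its index."""
    chosen = 0  # start from the first (or a merged) adapter
    for _ in range(max_iters):
        with torch.no_grad():
            feat = adapters[chosen](backbone(x)).flatten()  # feature under current guess
            sims = torch.stack([
                torch.cosine_similarity(feat, proto.flatten(), dim=0)
                for proto in task_prototypes
            ])
        new_choice = int(sims.argmax())
        if new_choice == chosen:  # selection has stabilized, stop refining
            break
        chosen = new_choice
    return adapters[chosen], chosen
```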