🤖 AI Summary
Large multimodal models (LMMs) suffer from static pretraining knowledge that impedes continual updating and is prone to catastrophic forgetting. To address this, we propose KORE, a method that jointly optimizes knowledge adaptation and retention. First, new knowledge items are structured as triples and injected into multimodal inputs. Second, the covariance matrix of historical task linear-layer activations is explicitly leveraged to model the distribution of prior knowledge. Third, adapter parameters are initialized via null-space projection, constraining fine-tuning directions to minimize interference with retained knowledge. Evaluated on LLaVA and Qwen2.5-VL, KORE achieves a +12.3% gain in new-knowledge injection accuracy while reducing forgetting to just 37% of the baseline's rate. To our knowledge, KORE is the first approach to unify efficient knowledge updating with robust knowledge retention in LMMs.
📄 Abstract
Large Multimodal Models encode extensive factual knowledge in their pre-trained weights. However, this knowledge remains static and limited, unable to keep pace with real-world developments, which hinders continuous knowledge acquisition. Effective knowledge injection thus becomes critical, involving two goals: knowledge adaptation (injecting new knowledge) and knowledge retention (preserving old knowledge). Existing methods often struggle to learn new knowledge and suffer from catastrophic forgetting. To address this, we propose KORE, a synergistic method of KnOwledge-oRientEd augmentations and constraints for injecting new knowledge into large multimodal models while preserving old knowledge. Unlike general text or image data augmentation, KORE automatically converts individual knowledge items into structured and comprehensive knowledge, ensuring that the model learns new facts accurately and enabling precise adaptation. Meanwhile, KORE stores previous knowledge in the covariance matrices of the LMM's linear-layer activations and initializes the adapter by projecting the original weights into each matrix's null space, defining a fine-tuning direction that minimizes interference with previous knowledge and enabling strong retention. Extensive experiments on various LMMs, including LLaVA-v1.5-7B, LLaVA-v1.5-13B, and Qwen2.5-VL-7B, show that KORE achieves superior new-knowledge injection performance and effectively mitigates catastrophic forgetting.
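The retention mechanism above (storing prior knowledge as an activation covariance and initializing the adapter in its null space) can be sketched as follows. This is a minimal illustration of the general null-space-projection idea, not the paper's exact procedure; the function name, tolerance, and toy shapes are assumptions:

```python
import numpy as np

def null_space_projection(W, A, tol=1e-5):
    """Project weight matrix W (out_dim x in_dim) onto the approximate
    null space of the activation covariance C = A^T A / n, so that the
    projected weights produce near-zero outputs on activations drawn
    from the prior-task distribution captured by A (n x in_dim)."""
    C = A.T @ A / A.shape[0]                 # covariance of stored activations
    U, S, _ = np.linalg.svd(C)               # eigendecomposition (C is symmetric PSD)
    null_basis = U[:, S < tol * S.max()]     # directions with ~zero activation energy
    P = null_basis @ null_basis.T            # orthogonal projector onto the null space
    return W @ P                             # restrict W's input directions accordingly

# Toy check: prior activations occupy only the first 2 of 4 input dims.
rng = np.random.default_rng(0)
A = np.zeros((100, 4))
A[:, :2] = rng.normal(size=(100, 2))
W = rng.normal(size=(3, 4))
Wp = null_space_projection(W, A)

a_old = np.array([1.0, -2.0, 0.0, 0.0])     # an activation from the prior distribution
print(np.abs(Wp @ a_old).max())             # ~0: projected weights ignore old inputs
```

Because `Wp` annihilates every direction in which prior-task activations have energy, fine-tuning that starts from (or is constrained to) this subspace leaves the model's outputs on old inputs essentially unchanged, which is the intuition behind the retention guarantee.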