KORE: Enhancing Knowledge Injection for Large Multimodal Models via Knowledge-Oriented Augmentations and Constraints

๐Ÿ“… 2025-10-22
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Large multimodal models (LMMs) suffer from static pretraining knowledge that impedes continual updating and is prone to catastrophic forgetting. To address this, we propose KOREโ€”a method that jointly optimizes knowledge adaptation and retention. First, new knowledge items are structured as triples and injected into multimodal inputs. Second, the covariance matrix of historical task linear-layer activations is explicitly leveraged to model the distribution of prior knowledge. Third, adapter parameters are initialized via null-space projection, constraining fine-tuning directions to minimize interference with retained knowledge. Evaluated on LLaVA and Qwen2.5-VL, KORE achieves a +12.3% gain in new-knowledge injection accuracy while reducing forgetting to just 37% of the baselineโ€™s rate. To our knowledge, KORE is the first approach to unify efficient knowledge updating with robust knowledge retention in LMMs.

Technology Category

Application Category

๐Ÿ“ Abstract
Large Multimodal Models encode extensive factual knowledge in their pre-trained weights. However, its knowledge remains static and limited, unable to keep pace with real-world developments, which hinders continuous knowledge acquisition. Effective knowledge injection thus becomes critical, involving two goals: knowledge adaptation (injecting new knowledge) and knowledge retention (preserving old knowledge). Existing methods often struggle to learn new knowledge and suffer from catastrophic forgetting. To address this, we propose KORE, a synergistic method of KnOwledge-oRientEd augmentations and constraints for injecting new knowledge into large multimodal models while preserving old knowledge. Unlike general text or image data augmentation, KORE automatically converts individual knowledge items into structured and comprehensive knowledge to ensure that the model accurately learns new knowledge, enabling accurate adaptation. Meanwhile, KORE stores previous knowledge in the covariance matrix of LMM's linear layer activations and initializes the adapter by projecting the original weights into the matrix's null space, defining a fine-tuning direction that minimizes interference with previous knowledge, enabling powerful retention. Extensive experiments on various LMMs, including LLaVA-v1.5-7B, LLaVA-v1.5-13B, and Qwen2.5-VL-7B, show that KORE achieves superior new knowledge injection performance and effectively mitigates catastrophic forgetting.
Problem

Research questions and friction points this paper is trying to address.

Injecting new knowledge into large multimodal models while preserving old knowledge
Addressing catastrophic forgetting during knowledge adaptation and retention
Converting individual knowledge items into structured comprehensive knowledge for accurate learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Converts knowledge items into structured knowledge
Stores previous knowledge in covariance matrix
Initializes adapter via null space projection
๐Ÿ”Ž Similar Papers
No similar papers found.
K
Kailin Jiang
University of Science and Technology of China
Hongbo Jiang
Hongbo Jiang
Hunan University
Mobile ComputingWireless NetworkingPrivacy Preserving
N
Ning Jiang
Northeast Forestry University
Z
Zhi Gao
Beijing Institute of Technology
Jinhe Bi
Jinhe Bi
LMU Munich
Efficient AIM/LLM
Yuchen Ren
Yuchen Ren
Renmin University of China
B
Bin Li
University of Science and Technology of China
Yuntao Du
Yuntao Du
Purdue University
Privacy
L
Lei Liu
University of Science and Technology of China
Q
Qing Li
State Key Laboratory of General Artificial Intelligence, BIGAI