When Modalities Remember: Continual Learning for Multimodal Knowledge Graphs

📅 2026-04-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses catastrophic forgetting in dynamic multimodal knowledge graphs, where existing methods struggle to balance learning new knowledge with retaining previously acquired information. The paper is the first to formally define and systematically investigate continual multimodal knowledge graph reasoning (CMMKGR), and proposes the MRCKG model. MRCKG integrates three components: a curriculum learning strategy that jointly leverages structural connectivity and multimodal compatibility; a cross-modal knowledge preservation mechanism built on entity stability, relational semantic consistency, and modality anchoring; and a multimodal contrastive replay scheme with a two-stage optimization strategy. Experiments show that MRCKG significantly mitigates forgetting and consistently improves reasoning over newly added multimodal knowledge across several newly constructed benchmark datasets.
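The curriculum component orders new triples from easy to hard using both structural connectivity to the historical graph and multimodal compatibility. A minimal sketch of one plausible scoring rule, assuming connectivity is the fraction of a triple's entities already seen and compatibility is the cosine similarity of toy multimodal embeddings (the paper's exact formulation is not given here; `alpha` and the embeddings are illustrative):

```python
import math

def curriculum_score(triple, historical_entities, mm_embed, alpha=0.5):
    """Hypothetical easy-to-hard score: higher = better connected to the
    historical graph and more modality-compatible."""
    h, r, t = triple
    # Structural connectivity: fraction of the triple's entities already known.
    connectivity = sum(e in historical_entities for e in (h, t)) / 2.0
    # Multimodal compatibility: cosine similarity of the entities' embeddings.
    vh, vt = mm_embed[h], mm_embed[t]
    dot = sum(a * b for a, b in zip(vh, vt))
    norm = math.sqrt(sum(a * a for a in vh)) * math.sqrt(sum(a * a for a in vt))
    compatibility = dot / norm if norm else 0.0
    return alpha * connectivity + (1 - alpha) * compatibility

# Schedule new triples from easy to hard (highest score first).
historical = {"cat", "dog"}
embed = {"cat": [1.0, 0.0], "dog": [0.9, 0.1], "axolotl": [0.0, 1.0]}
triples = [("cat", "similar_to", "axolotl"), ("cat", "similar_to", "dog")]
ordered = sorted(triples,
                 key=lambda tr: curriculum_score(tr, historical, embed),
                 reverse=True)
```

A triple whose entities are both in the historical graph and whose modality embeddings agree scores highest, so it is presented to the model first.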
📝 Abstract
Real-world multimodal knowledge graphs (MMKGs) are dynamic, with new entities, relations, and multimodal knowledge emerging over time. Existing continual knowledge graph reasoning (CKGR) methods focus on structural triples and cannot fully exploit multimodal signals from new entities. Existing multimodal knowledge graph reasoning (MMKGR) methods, however, usually assume static graphs and suffer catastrophic forgetting as graphs evolve. To address this gap, we present a systematic study of continual multimodal knowledge graph reasoning (CMMKGR). We construct several continual multimodal knowledge graph benchmarks from existing MMKG datasets and propose MRCKG, a new CMMKGR model. Specifically, MRCKG employs a multimodal-structural collaborative curriculum to schedule progressive learning based on the structural connectivity of new triples to the historical graph and their multimodal compatibility. It also introduces a cross-modal knowledge preservation mechanism to mitigate forgetting through entity representation stability, relational semantic consistency, and modality anchoring. In addition, a multimodal contrastive replay scheme with a two-stage optimization strategy reinforces learned knowledge via multimodal importance sampling and representation alignment. Experiments on multiple datasets show that MRCKG preserves previously learned multimodal knowledge while substantially improving the learning of new knowledge.
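The replay scheme described above reinforces learned knowledge via multimodal importance sampling. A toy sketch of an importance-weighted replay buffer, assuming eviction of the least important example and sampling proportional to importance (the capacity, the importance measure, and the eviction policy are assumptions, not the paper's specification):

```python
import random

class ReplayBuffer:
    """Toy replay buffer with importance-weighted sampling."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []  # stored (triple, importance) pairs

    def add(self, triple, importance):
        self.items.append((triple, importance))
        if len(self.items) > self.capacity:
            # Evict the least important stored example (assumed policy).
            self.items.remove(min(self.items, key=lambda x: x[1]))

    def sample(self, k, rng=random):
        # Draw replay examples with probability proportional to importance.
        weights = [imp for _, imp in self.items]
        picked = rng.choices(self.items, weights=weights,
                             k=min(k, len(self.items)))
        return [triple for triple, _ in picked]
```

In a two-stage setup, the sampled triples would be interleaved with new-knowledge batches so that a contrastive alignment loss can pull current and stored multimodal representations together.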
Problem

Research questions and friction points this paper is trying to address.

continual learning
multimodal knowledge graphs
catastrophic forgetting
knowledge graph reasoning
dynamic graphs
Innovation

Methods, ideas, or system contributions that make the work stand out.

continual learning
multimodal knowledge graph
catastrophic forgetting
curriculum learning
contrastive replay