AI Summary
Catastrophic forgetting in continual learning hinders retention of fine-grained knowledge for previously seen classes under class-incremental learning (CIL). To address this, we propose a knowledge graph-enhanced generative multimodal CIL framework. First, we construct and dynamically evolve a category-semantic knowledge graph to explicitly model inter-class relationships. Second, we design a relation-aware label augmentation mechanism to enhance discriminative representation learning. Third, we introduce a knowledge graph-driven text generation module for inference, enabling fine-grained class localization and error correction. This work is the first to integrate structured knowledge graphs into a generative multimodal continual learning framework, achieving both robust old-class memory preservation and adaptive integration of new classes. Our method achieves state-of-the-art performance on standard and few-shot CIL benchmarks, significantly reducing forgetting rates while improving cross-task generalization and error correction capabilities.
Abstract
Continual learning in computer vision faces the critical challenge of catastrophic forgetting, where models struggle to retain prior knowledge while adapting to new tasks. Although recent studies have attempted to leverage the generalization capabilities of pre-trained models to mitigate overfitting on current tasks, models still tend to forget details of previously learned categories as tasks progress, leading to misclassification. To address these limitations, we introduce a novel Knowledge Graph Enhanced Generative Multi-modal model (KG-GMM) that builds an evolving knowledge graph throughout the learning process. Our approach uses relationships within the knowledge graph to augment class labels and assigns different relations to similar categories to enhance model differentiation. During testing, we propose a Knowledge Graph Augmented Inference method that locates specific categories by analyzing relationships within the generated text, thereby reducing the loss of detailed information about old classes when learning new knowledge and alleviating forgetting. Experiments demonstrate that our method effectively leverages relational information to help the model correct mispredictions, achieving state-of-the-art results in both conventional CIL and few-shot CIL settings, confirming the efficacy of knowledge graphs for preserving knowledge in continual learning scenarios.
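The abstract does not include implementation details, but the core idea of Knowledge Graph Augmented Inference (matching relations found in generated text against per-class relations in the graph to localize or correct a prediction) can be sketched as follows. This is a minimal illustrative sketch under stated assumptions: the toy classes, relation phrases, and the simple substring-matching score are all hypothetical, not the paper's actual method.

```python
# Illustrative sketch of KG-augmented inference (assumptions, not the paper's code):
# each candidate class is associated with relation phrases from a knowledge graph,
# and the class whose relations best match the generated description is selected.

def kg_augmented_inference(generated_text, knowledge_graph):
    """Score each candidate class by how many of its KG relation phrases
    appear in the generated text; return the best-matching class,
    or None if no relation matched (i.e., no correction is possible)."""
    scores = {
        label: sum(1 for rel in relations if rel in generated_text)
        for label, relations in knowledge_graph.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

# Toy knowledge graph: hypothetical classes with relation phrases.
toy_kg = {
    "sparrow": ["has beak", "perches on branches", "smaller than crow"],
    "bat":     ["has wings", "nocturnal", "uses echolocation"],
}

# A generated description mentioning two of the bat's relations
# would be localized to "bat", correcting a possible misprediction.
print(kg_augmented_inference("the animal has wings and is nocturnal", toy_kg))
```

In the actual framework, the relation matching would operate on text produced by the generative multimodal model and on a graph that evolves as new classes arrive; the sketch only conveys the matching-and-correction principle.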