🤖 AI Summary
Existing continual learning methods are predominantly unimodal and thus struggle with catastrophic forgetting when confronted with multimodal task streams, such as images, videos, audio, depth, and text. This paper introduces a unified framework for multimodal continual learning. The method features a cross-modal knowledge aggregation mechanism that jointly leverages intra-modal self-supervised regularization and inter-modal contribution-aware alignment. To mitigate alignment inaccuracies induced by modality bias, the authors propose a modality-embedding recalibration strategy. Crucially, the approach operates without explicit modality identifiers, enabling modality-agnostic dynamic realignment of the embedding space throughout training. Evaluated on a comprehensive multimodal continual learning benchmark, the framework substantially outperforms existing state-of-the-art methods, demonstrating both strong generalization across diverse modalities and practical deployability.
📝 Abstract
Continual learning aims to acquire knowledge of tasks observed at sequential time steps while mitigating the forgetting of previously learned knowledge. Existing methods were proposed under the assumption of learning a single modality (e.g., image) over time, which limits their applicability in scenarios involving multiple modalities. In this work, we propose a novel continual learning framework that accommodates multiple modalities (image, video, audio, depth, and text). We train a model to align the various modalities with text, leveraging its rich semantic information. However, this increases the risk of forgetting previously learned knowledge, a risk exacerbated by the differing input characteristics of each task. To alleviate the overwriting of previously learned modality knowledge, we propose a method that aggregates knowledge within and across modalities: new information is assimilated through self-regularization within each modality, and knowledge is associated across modalities by prioritizing contributions from the most relevant ones. Furthermore, we propose a strategy that re-aligns the modality embeddings to resolve biased alignment between modalities. We evaluate the proposed method in a wide range of continual learning scenarios using multiple datasets with different modalities. Extensive experiments demonstrate that our method outperforms existing methods in these scenarios, regardless of whether the identity of the modality is given.
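The aggregation described above can be illustrated with a minimal sketch. This is not the paper's implementation: the blend coefficient `lam`, the temperature `tau`, the use of cosine similarity to the text anchor as the "contribution" score, and all function names are illustrative assumptions; the sketch only shows the flow of (1) intra-modal self-regularization, (2) contribution-aware inter-modal weighting, and (3) forming an aggregated embedding for re-alignment.

```python
import numpy as np

def l2_normalize(x, eps=1e-8):
    """Normalize a vector to unit length."""
    return x / (np.linalg.norm(x) + eps)

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def aggregate_knowledge(new_emb, old_emb, text_emb, lam=0.5, tau=0.1):
    """Toy cross-modal knowledge aggregation (illustrative only).

    new_emb:  dict modality -> (d,) embedding from the current task
    old_emb:  dict modality -> (d,) embedding retained from earlier tasks
    text_emb: (d,) text anchor embedding
    """
    # 1) Intra-modal self-regularization: pull each new embedding
    #    toward its previous version to limit overwriting.
    reg = {m: l2_normalize((1 - lam) * new_emb[m] + lam * old_emb[m])
           for m in new_emb}

    # 2) Inter-modal contribution weights: modalities whose embeddings
    #    align more closely with the text anchor contribute more.
    mods = sorted(reg)
    anchor = l2_normalize(text_emb)
    sims = np.array([float(reg[m] @ anchor) for m in mods])
    weights = softmax(sims / tau)

    # 3) Aggregated embedding, usable as a target for re-aligning
    #    each modality's embedding space.
    agg = l2_normalize(sum(w * reg[m] for w, m in zip(weights, mods)))
    return reg, dict(zip(mods, weights)), agg

rng = np.random.default_rng(0)
d = 8
new_emb = {m: rng.normal(size=d) for m in ("image", "audio", "depth")}
old_emb = {m: rng.normal(size=d) for m in ("image", "audio", "depth")}
text_emb = rng.normal(size=d)
reg, weights, agg = aggregate_knowledge(new_emb, old_emb, text_emb)
```

The key design point the abstract implies is that the weights are computed from the data itself (here, similarity to the text anchor) rather than from a given modality identifier, which is what makes the aggregation modality-agnostic.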