🤖 AI Summary
Multimodal large language models (MLLMs) suffer from catastrophic forgetting when undergoing continual learning in dynamic, real-world visual environments. Method: To address this, we introduce MSVQA—the first multi-scene visual question answering dataset covering four distinct domains: high-altitude, underwater, low-altitude, and indoor scenes—and propose UNIFIER, a novel continual learning framework. UNIFIER incorporates a scene-aware branching mechanism within each Vision Transformer block to disentangle heterogeneous visual inputs into a unified feature space, enforced by cross-branch consistency constraints to enhance representation stability. Contribution/Results: Compared to standard fine-tuning and state-of-the-art continual learning baselines, UNIFIER significantly mitigates cross-scene forgetting on MSVQA. It preserves performance on previously learned scenes while effectively accumulating knowledge from newly encountered ones, demonstrating strong efficacy and generalizability in enhancing MLLMs’ long-term visual understanding under realistic, non-stationary data streams.
📝 Abstract
Continual learning for visual understanding aims to mitigate catastrophic forgetting in Multimodal Large Language Models (MLLMs). MLLMs deployed on devices must continuously adapt to dynamic downstream scenarios, such as variations in background and perspective, to perform complex visual tasks effectively. To this end, we construct a multimodal visual understanding dataset (MSVQA) encompassing four distinct scenarios and perspectives, namely high altitude, underwater, low altitude, and indoor, to investigate catastrophic forgetting in MLLMs under the scenario shifts of real-world data streams. Furthermore, we propose mUltimodal coNtInual learning with MLLMs From multi-scenarIo pERspectives (UNIFIER) to address visual discrepancies across scenarios. Specifically, UNIFIER decouples the visual information from different scenarios into distinct branches within each vision block and projects them into a shared feature space. A consistency constraint imposed on the features of each branch maintains the stability of visual representations across scenarios. Extensive experiments on the MSVQA dataset demonstrate that UNIFIER effectively alleviates forgetting of cross-scenario tasks while accumulating knowledge within the same scenario.
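The branching idea described above can be sketched in code. The snippet below is a minimal, hypothetical illustration (not the authors' implementation): each vision block holds one lightweight adapter branch per scenario, the active branch is applied residually so all branches project back into the same feature space, and a cross-branch consistency loss penalizes divergence from previously learned branches. All names (`SceneBranchBlock`, `consistency_loss`) and the choice of an MLP adapter with an MSE consistency term are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SceneBranchBlock(nn.Module):
    """Hypothetical scene-aware branching module for one vision block.

    One lightweight adapter branch per scenario; the residual connection
    keeps every branch's output in the shared token feature space.
    """

    def __init__(self, dim: int, num_scenes: int, hidden: int = 64):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            for _ in range(num_scenes)
        ])
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor, scene_id: int) -> torch.Tensor:
        # Route tokens through the branch of the current scenario (residual).
        return x + self.branches[scene_id](self.norm(x))


def consistency_loss(block: SceneBranchBlock, x: torch.Tensor,
                     scene_id: int, prev_ids: list[int]) -> torch.Tensor:
    """Assumed cross-branch consistency term: keep the active branch's
    features close to those of frozen, previously learned branches."""
    cur = block(x, scene_id)
    loss = x.new_zeros(())
    for pid in prev_ids:
        with torch.no_grad():  # earlier branches serve as fixed references
            ref = block(x, pid)
        loss = loss + F.mse_loss(cur, ref)
    return loss / max(len(prev_ids), 1)
```

During continual training, `consistency_loss` would be added to the VQA objective when a new scenario arrives, discouraging the new branch's representations from drifting away from those of earlier scenarios.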