Multimodal Continual Learning with MLLMs from Multi-scenario Perspectives

📅 2025-11-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal large language models (MLLMs) suffer from catastrophic forgetting when undergoing continual learning in dynamic, real-world visual environments. Method: To address this, we introduce MSVQA—the first multi-scene visual question answering dataset covering four distinct domains: high-altitude, underwater, low-altitude, and indoor scenes—and propose UNIFIER, a novel continual learning framework. UNIFIER incorporates a scene-aware branching mechanism within each Vision Transformer block to disentangle heterogeneous visual inputs into a unified feature space, enforced by cross-branch consistency constraints to enhance representation stability. Contribution/Results: Compared to standard fine-tuning and state-of-the-art continual learning baselines, UNIFIER significantly mitigates cross-scene forgetting on MSVQA. It preserves performance on previously learned scenes while effectively accumulating knowledge from newly encountered ones, demonstrating strong efficacy and generalizability in enhancing MLLMs’ long-term visual understanding under realistic, non-stationary data streams.

📝 Abstract
Continual learning in visual understanding aims to deal with catastrophic forgetting in Multimodal Large Language Models (MLLMs). MLLMs deployed on devices have to continuously adapt to dynamic scenarios in downstream tasks, such as variations in background and perspective, to effectively perform complex visual tasks. To this end, we construct a multimodal visual understanding dataset (MSVQA) encompassing four different scenarios and perspectives including high altitude, underwater, low altitude and indoor, to investigate the catastrophic forgetting in MLLMs under the dynamics of scenario shifts in real-world data streams. Furthermore, we propose mUltimodal coNtInual learning with MLLMs From multi-scenarIo pERspectives (UNIFIER) to address visual discrepancies while learning different scenarios. Specifically, it decouples the visual information from different scenarios into distinct branches within each vision block and projects them into the same feature space. A consistency constraint is imposed on the features of each branch to maintain the stability of visual representations across scenarios. Extensive experiments on the MSVQA dataset demonstrate that UNIFIER effectively alleviates forgetting of cross-scenario tasks and achieves knowledge accumulation within the same scenario.
Problem

Research questions and friction points this paper is trying to address.

Address catastrophic forgetting in Multimodal Large Language Models during continual learning
Adapt MLLMs to dynamic scenario shifts in real-world visual tasks
Mitigate visual discrepancies when learning from multiple scenario perspectives
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decouples visual information from different scenarios into distinct branches within each vision block
Projects branch features into a shared feature space
Applies a consistency constraint across scenario branches to stabilize visual representations
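The branching-plus-consistency idea in the bullets above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the dimensions, the single-linear-layer "branches", and the mean-squared consistency loss are all assumptions standing in for the per-ViT-block branches and the unified projection described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions; the paper does not specify these values.
D_IN, D_OUT, N_SCENES = 16, 8, 4   # token dim, shared dim, scenes in MSVQA
N_TOKENS = 5

# One lightweight branch (a plain linear map here) per scenario, plus a shared
# projection into a common feature space. UNIFIER places such branches inside
# each vision block; this sketch abstracts a whole block to a single layer.
branches = [rng.normal(scale=0.1, size=(D_IN, D_OUT)) for _ in range(N_SCENES)]
shared_proj = rng.normal(scale=0.1, size=(D_OUT, D_OUT))

def branch_features(x, scene_id):
    """Route tokens x of shape (N_TOKENS, D_IN) through the scene-specific
    branch, then project into the shared feature space."""
    return (x @ branches[scene_id]) @ shared_proj

def consistency_loss(x, scene_a, scene_b):
    """Mean-squared distance between two branches' projected features:
    a stand-in for the paper's cross-branch consistency constraint."""
    fa, fb = branch_features(x, scene_a), branch_features(x, scene_b)
    return float(np.mean((fa - fb) ** 2))

tokens = rng.normal(size=(N_TOKENS, D_IN))
# e.g. compare the high-altitude branch (0) against the underwater branch (1)
loss = consistency_loss(tokens, scene_a=0, scene_b=1)
```

During continual training, minimizing such a loss would pull the per-scene branch outputs toward a common representation, which is the mechanism the abstract credits with keeping visual features stable across scenario shifts.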
Kai Jiang
School of Mathematics and Computational Science
Quasiperiodic Systems; Applied Mathematics & Computational Mathematics
Siqi Huang
School of Artificial Intelligence, OPtics and ElectroNics, Northwestern Polytechnical University
Xiangyu Chen
Institute of Artificial Intelligence (TeleAI) of China Telecom
Jiawei Shao
Institute of Artificial Intelligence (TeleAI) of China Telecom
Hongyuan Zhang
University of Hong Kong
Xuelong Li
Institute of Artificial Intelligence (TeleAI) of China Telecom