CL-MoE: Enhancing Multimodal Large Language Model with Dual Momentum Mixture-of-Experts for Continual Visual Question Answering

📅 2025-03-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address catastrophic forgetting in multimodal large language models (MLLMs) during continual visual question answering (VQA), this paper proposes a Dual-Momentum Mixture-of-Experts (DMoE) framework. DMoE introduces task-level and instance-level dual routers for dynamic expert selection and incorporates a relation-aware momentum-based parameter update mechanism to balance knowledge transfer and model stability. Distinct from conventional continual learning paradigms, DMoE is the first to deeply integrate MoE architecture with momentum-driven parameter updates, enabling synergistic co-evolution of old and new knowledge. Evaluated on ten standard continual VQA benchmarks, DMoE achieves state-of-the-art performance: it improves average accuracy by 5.2% and knowledge retention rate by 12.7% over existing methods.

Technology Category

Application Category

📝 Abstract
Multimodal large language models (MLLMs) have garnered widespread attention from researchers due to their remarkable understanding and generation capabilities in visual language tasks (e.g., visual question answering). However, the rapid pace of knowledge updates in the real world makes offline training of MLLMs costly, and when faced with non-stationary data streams, MLLMs suffer from catastrophic forgetting during learning. In this paper, we propose an MLLMs-based dual momentum Mixture-of-Experts (CL-MoE) framework for continual visual question answering (VQA). We integrate MLLMs with continual learning to utilize the rich commonsense knowledge in LLMs. We introduce a Dual-Router MoE (RMoE) strategy to select the global and local experts using task-level and instance-level routers, to robustly assign weights to the experts most appropriate for the task. Then, we design a dynamic Momentum MoE (MMoE) to update the parameters of experts dynamically based on the relationships between the experts and tasks/instances, so that the model can absorb new knowledge while maintaining existing knowledge. The extensive experimental results indicate that our method achieves state-of-the-art performance on 10 VQA tasks, proving the effectiveness of our approach.
Problem

Research questions and friction points this paper is trying to address.

Addresses catastrophic forgetting in multimodal large language models
Proposes a dual momentum Mixture-of-Experts framework for continual learning
Enhances visual question answering with dynamic knowledge integration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-Router MoE strategy for expert selection
Dynamic Momentum MoE for parameter updates
Integration of MLLMs with continual learning
🔎 Similar Papers
No similar papers found.
Tianyu Huai
Tianyu Huai
East China Normal University
Continual Learning
J
Jie Zhou
School of Computer Science and Technology, East China Normal University
Xingjiao Wu
Xingjiao Wu
East China Normal University
Computer VisionCrowd CountingDocument Layout AnalysisHuman-in-the-loop
Q
Qin Chen
School of Computer Science and Technology, East China Normal University
Q
Qingchun Bai
Shanghai Open University, Shanghai, China
Z
Ze Zhou
ZhuQingTing Data Technology (Zhejiang) Co., Ltd.
L
Liang He
School of Computer Science and Technology, East China Normal University