🤖 AI Summary
To address catastrophic forgetting in multimodal large language models (MLLMs) during continual visual question answering (VQA), this paper proposes CL-MoE, a dual-momentum Mixture-of-Experts framework. CL-MoE introduces a Dual-Router MoE (RMoE) with task-level and instance-level routers for dynamic expert selection, and a Momentum MoE (MMoE) whose relation-aware, momentum-based parameter updates balance knowledge transfer against model stability. Distinct from conventional continual learning paradigms, CL-MoE integrates the MoE architecture with momentum-driven parameter updates so that old and new knowledge evolve together. Evaluated on 10 continual VQA tasks, CL-MoE achieves state-of-the-art performance, improving average accuracy by 5.2% and knowledge retention by 12.7% over existing methods.
📝 Abstract
Multimodal large language models (MLLMs) have garnered widespread attention for their remarkable understanding and generation capabilities on vision-language tasks (e.g., visual question answering). However, the rapid pace of knowledge updates in the real world makes offline training of MLLMs costly, and when faced with non-stationary data streams, MLLMs suffer from catastrophic forgetting. In this paper, we propose an MLLM-based dual-momentum Mixture-of-Experts (CL-MoE) framework for continual visual question answering (VQA). We integrate MLLMs with continual learning to exploit the rich commonsense knowledge in LLMs. We introduce a Dual-Router MoE (RMoE) strategy that selects global and local experts via task-level and instance-level routers, robustly assigning weights to the experts best suited to each task. We then design a dynamic Momentum MoE (MMoE) that updates expert parameters based on the relationships between experts and tasks/instances, allowing the model to absorb new knowledge while retaining existing knowledge. Extensive experimental results show that our method achieves state-of-the-art performance on 10 VQA tasks, demonstrating the effectiveness of our approach.
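The two mechanisms described above can be illustrated with a minimal sketch. This is not the paper's implementation: the combination rule, the `alpha` mixing coefficient, the `relevance` score, and all function names below are illustrative assumptions, shown only to make the dual-router weighting and the relation-dependent momentum update concrete.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D logit vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def dual_router_weights(task_logits, instance_logits, alpha=0.5):
    """Hypothetical dual-router combination: blend task-level (global)
    and instance-level (local) routing distributions over experts.
    `alpha` is an assumed mixing coefficient, not from the paper."""
    return alpha * softmax(task_logits) + (1 - alpha) * softmax(instance_logits)

def momentum_update(old_params, new_params, relevance, base_momentum=0.9):
    """Hypothetical relation-aware momentum update: experts judged more
    relevant to the current task (relevance close to 1) absorb more of
    the new parameters; less relevant experts stay close to their old
    values, preserving existing knowledge."""
    m = base_momentum * (1.0 - relevance)  # effective momentum in [0, base_momentum]
    return m * old_params + (1.0 - m) * new_params

# Example: route over 4 experts, then update one expert's parameters.
weights = dual_router_weights(np.array([2.0, 0.5, 0.1, 0.1]),
                              np.array([0.1, 1.5, 0.2, 0.1]))
updated = momentum_update(np.zeros(3), np.ones(3), relevance=0.8)
```

Here a highly relevant expert (`relevance=0.8`) moves most of the way toward its newly learned parameters, while an irrelevant one (`relevance=0.0`) would retain 90% of its old weights, which is the intuition behind absorbing new knowledge without overwriting old knowledge.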