Enhanced Continual Learning of Vision-Language Models with Model Fusion

📅 2025-03-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Vision-Language Models (VLMs) suffer severe catastrophic forgetting when sequentially fine-tuned on multiple downstream tasks, while existing continual learning approaches often rely on auxiliary reference data, compromise zero-shot capability, or are constrained to parameter-efficient fine-tuning. This paper proposes Continual Decoupling-Unifying (ConDU), the first framework to introduce model fusion into VLM continual learning. ConDU maintains a unified model together with task triggers and prototype sets, iteratively decoupling task-specific models for previous tasks and unifying them with the model for the newly learned task; the scheme supports both full-parameter and parameter-efficient fine-tuning. It also introduces a zero-shot inference strategy that aggregates predictions from multiple decoupled task-specific models to preserve the original generalization ability. Experiments show that ConDU outperforms state-of-the-art baselines by up to 2% in average performance across all seen tasks, enhances zero-shot capability relative to the original VLM, and requires no reference data.
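A minimal sketch of the decoupling-unifying loop described above. The names `decouple`, `unify`, and `learn_new_task` are illustrative; modeling a task trigger as a per-parameter delta and fusion as parameter averaging are assumptions (averaging is only the simplest fusion baseline, and ConDU's actual unification additionally uses prototype sets, which this sketch omits):

```python
import torch

def decouple(unified, trigger):
    """Recover one task-specific model from the unified model.
    Here a task trigger is modeled as a per-parameter delta;
    the paper's actual trigger mechanism may differ."""
    return {k: unified[k] + trigger[k] for k in unified}

def unify(task_models):
    """Fuse task-specific models by parameter averaging, a common
    model-fusion baseline used as a stand-in for ConDU's rule."""
    return {k: torch.stack([m[k] for m in task_models]).mean(dim=0)
            for k in task_models[0]}

def learn_new_task(unified, triggers, new_model, new_trigger):
    """One continual-learning round: decouple the previous task
    models, then unify them with the newly fine-tuned task model."""
    previous = [decouple(unified, t) for t in triggers]  # decoupling step
    unified = unify(previous + [new_model])              # unifying step
    triggers.append(new_trigger)
    return unified
```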

📝 Abstract
Vision-Language Models (VLMs) represent a breakthrough in artificial intelligence by integrating visual and textual modalities to achieve impressive zero-shot capabilities. However, VLMs are susceptible to catastrophic forgetting when sequentially fine-tuned on multiple downstream tasks. Existing continual learning methods for VLMs often rely heavily on additional reference datasets, compromise zero-shot performance, or are limited to parameter-efficient fine-tuning scenarios. In this paper, we propose Continual Decoupling-Unifying (ConDU), a novel approach that introduces model fusion into continual learning for VLMs. ConDU maintains a unified model along with task triggers and prototype sets, employing an iterative process of decoupling task-specific models for previous tasks and unifying them with the model for the newly learned task. Additionally, we introduce an inference strategy for zero-shot scenarios by aggregating predictions from multiple decoupled task-specific models. Extensive experiments across various settings show that ConDU achieves up to a 2% improvement in average performance across all seen tasks compared to state-of-the-art baselines, while also enhancing zero-shot capabilities relative to the original VLM.
Problem

Research questions and friction points this paper is trying to address.

VLMs suffer catastrophic forgetting when sequentially fine-tuned on multiple downstream tasks.
Existing continual learning methods depend on auxiliary reference data, sacrifice zero-shot performance, or are limited to parameter-efficient fine-tuning.
Preserving the original zero-shot generalization while accumulating task-specific knowledge.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Model fusion introduced into continual learning for VLMs
Iterative decoupling and unifying of task-specific models via task triggers and prototype sets
Prediction aggregation across decoupled task models for zero-shot inference (see the sketch after this list)
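A minimal sketch of the zero-shot aggregation strategy, assuming CLIP-style task models that embed images and text prompts into a shared space. The `encode_image`/`encode_text` interface and the simple averaging of class probabilities are assumptions; the paper's exact aggregation rule (e.g., weighting via prototype sets) may differ:

```python
import torch

def zero_shot_predict(task_models, image, prompts):
    """Aggregate zero-shot predictions over the decoupled
    task-specific models by averaging their class probabilities
    (a plausible reading of ConDU's strategy)."""
    probs = []
    for model in task_models:
        img = model.encode_image(image)             # assumed encoder API
        txt = model.encode_text(prompts)
        img = img / img.norm(dim=-1, keepdim=True)  # normalize for cosine similarity
        txt = txt / txt.norm(dim=-1, keepdim=True)
        logits = 100.0 * img @ txt.T                # CLIP-style scaled logits
        probs.append(logits.softmax(dim=-1))
    return torch.stack(probs).mean(dim=0)           # fused prediction
```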
Haoyuan Gao
Shanghai Jiao Tong University
Zicong Zhang
Shanghai Jiao Tong University
Yuqi Wei
Shanghai Jiao Tong University
Linglan Zhao
Shanghai Jiao Tong University
Deep learning, Few-shot learning, Meta-learning
Guilin Li
National University of Singapore
machine learning, deep learning
Yexin Li
State Key Laboratory of General Artificial Intelligence, BIGAI
reinforcement learning, multi-agent system, multi-armed bandits, data mining
Linghe Kong
Shanghai Jiao Tong University
Internet of Things, Mobile computing, Big data
Weiran Huang
Shanghai Jiao Tong University, Shanghai Innovation Institute, State Key Laboratory of General Artificial Intelligence, BIGAI