Learning without Forgetting for Vision-Language Models

📅 2023-05-30
🏛️ arXiv.org
📈 Citations: 29
✨ Influential: 1
🤖 AI Summary
To address catastrophic forgetting and insufficient multimodal synergy in class-incremental learning (CIL) for vision-language models (VLMs), this paper proposes PROOF: a framework that freezes the pretrained image and text encoders to preserve general-purpose representations, expands task-specific linear projections as new tasks arrive, and introduces a cross-modal fusion module for joint alignment and adaptive integration of visual and textual features. By decoupling representation learning from task adaptation, PROOF avoids parameter redundancy and interference, balancing stability and plasticity. Evaluated on nine standard CIL benchmarks, PROOF consistently outperforms state-of-the-art methods, reducing average forgetting, improving incremental-stage accuracy, and enhancing cross-task generalization. The framework establishes an efficient, scalable paradigm for multimodal continual learning.
๐Ÿ“ Abstract
Class-Incremental Learning (CIL) or continual learning is a desired capability in the real world, which requires a learning system to adapt to new tasks without forgetting former ones. While traditional CIL methods focus on visual information to grasp core features, recent advances in Vision-Language Models (VLM) have shown promising capabilities in learning generalizable representations with the aid of textual information. However, when continually trained with new classes, VLMs often suffer from catastrophic forgetting of former knowledge. Applying VLMs to CIL poses two major challenges: 1) how to adapt the model without forgetting; and 2) how to make full use of the multi-modal information. To this end, we propose PROjectiOn Fusion (PROOF) that enables VLMs to learn without forgetting. To handle the first challenge, we propose training task-specific projections based on the frozen image/text encoders. When facing new tasks, new projections are expanded and former projections are fixed, alleviating the forgetting of old concepts. For the second challenge, we propose the fusion module to better utilize the cross-modality information. By jointly adjusting visual and textual features, the model can capture semantic information with stronger representation ability. Extensive experiments on nine benchmark datasets validate PROOF achieves state-of-the-art performance. Code is available at https://github.com/zhoudw-zdw/PROOF
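The abstract's first mechanism — training a new projection per task on top of frozen encoders while fixing earlier projections — can be sketched in a few lines. This is a hypothetical illustration of the idea, not the paper's actual code; the class name `ProjectionPool`, the linear layers, and the sum aggregation are all assumptions for clarity.

```python
import torch
import torch.nn as nn

class ProjectionPool(nn.Module):
    """Sketch of expandable task-specific projections over a frozen encoder.

    Hypothetical illustration of the idea in the abstract: the encoder stays
    frozen, a fresh projection is appended for each new task, and earlier
    projections are frozen so old concepts are not overwritten.
    """
    def __init__(self, feat_dim: int, proj_dim: int):
        super().__init__()
        self.feat_dim = feat_dim
        self.proj_dim = proj_dim
        self.projections = nn.ModuleList()

    def add_task(self) -> None:
        # Freeze every previously learned projection ...
        for proj in self.projections:
            for p in proj.parameters():
                p.requires_grad = False
        # ... then expand with a trainable projection for the new task.
        self.projections.append(nn.Linear(self.feat_dim, self.proj_dim))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Aggregate all task projections (a plain sum is one simple choice).
        return torch.stack([p(features) for p in self.projections]).sum(dim=0)
```

The same pool would be kept separately for the image and the text branch, with only the newest projection receiving gradients during each incremental stage.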
Problem

Research questions and friction points this paper is trying to address.

Mitigate catastrophic forgetting when Vision-Language Models are trained continually
Make fuller use of multi-modal (visual and textual) information in CIL
Enable adaptation to new classes without losing old knowledge
Innovation

Methods, ideas, or system contributions that make the work stand out.

PROOF lets VLMs learn incrementally without forgetting
Task-specific projections over frozen encoders preserve old concepts
A fusion module jointly adjusts visual and textual features for stronger cross-modal representations