LLaVA-c: Continual Improved Visual Instruction Tuning

📅 2025-06-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal large language models face critical challenges in visual instruction tuning, including task imbalance, catastrophic forgetting when new tasks are introduced, and degradation of foundational general-purpose capabilities after continual learning. To address these issues, this paper proposes an incremental visual instruction tuning paradigm with two key components: (1) spectral-aware consolidation, which leverages spectral analysis to selectively freeze parameters, and (2) unsupervised inquiry regularization, comprising contrastive instruction diversity constraints and progressive vision-language alignment. Implemented on the LLaVA-1.5 architecture, the approach trains tasks sequentially in a single-task setting, yet achieves performance comparable to or exceeding that of multi-task joint training. On standard visual understanding benchmarks, it significantly mitigates forgetting while preserving general instruction-following capability and cross-task generalization.

📝 Abstract
Multimodal models like LLaVA-1.5 achieve state-of-the-art visual understanding through visual instruction tuning on multitask datasets, enabling strong instruction-following and multimodal performance. However, multitask learning faces challenges such as task balancing, which requires careful adjustment of data proportions, and expansion costs, where new tasks risk catastrophic forgetting and demand costly retraining. Continual learning offers a promising alternative: acquiring new knowledge incrementally while preserving existing capabilities. However, current methods prioritize task-specific performance, neglecting base model degradation from overfitting to specific instructions, which undermines general capabilities. In this work, we propose a simple but effective method with two modifications to LLaVA-1.5: spectral-aware consolidation for improved task balance and unsupervised inquiry regularization to prevent base model degradation. We evaluate both general and task-specific performance across continual pretraining and fine-tuning. Experiments demonstrate that LLaVA-c consistently enhances standard benchmark performance and preserves general capabilities. For the first time, we show that task-by-task continual learning can achieve results that match or surpass multitask joint learning. The code will be publicly released.
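The abstract names spectral-aware consolidation as the mechanism for protecting earlier-task knowledge, but this page does not detail its exact criterion. As an illustration only, the sketch below assumes a simple SVD-based variant in which gradient updates are projected away from a weight matrix's dominant singular directions; the function name `spectral_freeze_mask` and the `keep_ratio` parameter are hypothetical, not the paper's actual implementation:

```python
import numpy as np

def spectral_freeze_mask(weight, keep_ratio=0.5):
    """Illustrative sketch: protect the dominant singular subspace of a
    weight matrix by projecting gradient updates away from it.
    (Hypothetical; the paper's actual consolidation rule may differ.)"""
    u, s, vt = np.linalg.svd(weight, full_matrices=False)
    k = max(1, int(len(s) * keep_ratio))
    top_v = vt[:k]  # top-k right singular vectors (rows are orthonormal)

    def project(grad):
        # Remove the gradient's component inside the protected subspace,
        # so updates leave the dominant (old-task) directions untouched.
        return grad - grad @ top_v.T @ top_v

    return project

# Usage: wrap each new-task gradient before applying the optimizer step.
w = np.random.randn(8, 8)
project = spectral_freeze_mask(w, keep_ratio=0.25)
g = np.random.randn(8, 8)
g_safe = project(g)  # orthogonal to the protected singular directions
```

The design choice here, freezing a subspace rather than individual weights, is one common reading of "spectral analysis to selectively freeze parameters"; the paper itself may instead score or freeze parameters by other spectral statistics.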
Problem

Research questions and friction points this paper is trying to address.

Addresses task balancing in multimodal continual learning
Prevents base model degradation from overfitting
Enhances performance without costly retraining
Innovation

Methods, ideas, or system contributions that make the work stand out.

Spectral-aware consolidation for task balance
Unsupervised inquiry regularization prevents degradation
Continual learning matches multitask joint performance
Wenzhuo Liu
School of Artificial Intelligence, UCAS; State Key Laboratory of Multimodal Artificial Intelligence Systems, CASIA
Fei Zhu
Centre for Artificial Intelligence and Robotics, HKISI-CAS
Haiyang Guo
Institute of Automation, Chinese Academy of Sciences
Continual Learning, Multimodal Learning, Pattern Recognition
Longhui Wei
Senior Researcher, Huawei
Multimodal & Visual Pre-training, VLM, Multimodal Generation
Cheng-Lin Liu
School of Artificial Intelligence, UCAS; State Key Laboratory of Multimodal Artificial Intelligence Systems, CASIA