🤖 AI Summary
Multimodal large language models face critical challenges in visual instruction tuning, including task imbalance, catastrophic forgetting when new tasks are introduced, and degradation of general-purpose capabilities after continual learning. To address these issues, this paper proposes an incremental visual instruction tuning paradigm, LLaVA-c, built on the LLaVA-1.5 architecture. The method introduces two key components: (1) spectral-aware consolidation for improved task balance, and (2) unsupervised inquiry regularization to prevent degradation of the base model. Although tasks are trained sequentially in a single-task setting, the approach matches or exceeds multi-task joint training. On standard visual understanding benchmarks, it significantly mitigates forgetting while preserving general instruction-following ability and cross-task generalization.
📝 Abstract
Multimodal models like LLaVA-1.5 achieve state-of-the-art visual understanding through visual instruction tuning on multitask datasets, enabling strong instruction-following and multimodal performance. However, multitask learning faces two challenges: task balancing, which requires careful tuning of data proportions, and expansion cost, since adding new tasks risks catastrophic forgetting and demands expensive retraining. Continual learning offers a promising alternative: acquiring new knowledge incrementally while preserving existing capabilities. Yet current methods prioritize task-specific performance and neglect base model degradation from overfitting to specific instructions, which undermines general capabilities. In this work, we propose LLaVA-c, a simple but effective method with two modifications to LLaVA-1.5: spectral-aware consolidation for improved task balance, and unsupervised inquiry regularization to prevent base model degradation. We evaluate both general and task-specific performance across continual pretraining and fine-tuning. Experiments demonstrate that LLaVA-c consistently enhances standard benchmark performance and preserves general capabilities. For the first time, we show that task-by-task continual learning can match or surpass multitask joint learning. The code will be publicly released.
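The abstract does not spell out how spectral-aware consolidation is computed. As a purely illustrative sketch (not the paper's actual formulation), the idea of a spectrally weighted consolidation penalty could look like the following: per-row importance scores are derived from the singular-value energy of a previous task's weight matrix, and a quadratic penalty pulls high-importance rows toward their anchored values during training on a new task. All function names and the importance heuristic here are assumptions for illustration.

```python
import numpy as np

def spectral_importance(weight):
    """Illustrative per-row importance from singular-value energy.

    Rows that contribute heavily to the dominant singular directions
    of the old task's weights receive higher consolidation weight.
    (Assumed heuristic, not the paper's definition.)
    """
    u, s, _ = np.linalg.svd(weight, full_matrices=False)
    energy = (u ** 2) @ (s ** 2)      # each row's energy across directions
    return energy / energy.sum()       # normalize to a distribution

def consolidation_penalty(weight, anchor, importance, lam=1.0):
    """Quadratic penalty pulling important rows toward the anchor weights."""
    diff = weight - anchor
    return lam * float(np.sum(importance[:, None] * diff ** 2))

rng = np.random.default_rng(0)
W_old = rng.standard_normal((8, 4))            # weights after the old task
W_new = W_old + 0.1 * rng.standard_normal((8, 4))  # drifted during new task
imp = spectral_importance(W_old)
penalty = consolidation_penalty(W_new, W_old, imp)
print(penalty)
```

In an actual training loop, `penalty` would be added to the new task's loss so that gradient updates trade off new-task fit against drift in spectrally important directions, analogous in spirit to importance-weighted consolidation methods such as EWC.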