🤖 AI Summary
This work addresses catastrophic forgetting in fine-tuning multimodal large language models (MLLMs), which often severely degrades their pretrained capabilities. The authors propose a data-free sparse fine-tuning method that, for the first time, jointly combines weight magnitude, input activation, and output sensitivity into a parameter importance score, computed without access to the original training data. High-importance parameters are selectively frozen during fine-tuning to mitigate forgetting, and a data-agnostic importance-probing technique allows the approach to scale efficiently to billion-parameter models. Experiments on LLaVA and NVILA demonstrate substantial improvements over existing methods, preserving pretrained knowledge while maintaining computational efficiency.
📝 Abstract
Fine-tuning Multimodal Large Language Models (MLLMs) on task-specific data is an effective way to improve performance on downstream applications. However, such adaptation often degrades generalization on pretrained tasks, a phenomenon known as catastrophic forgetting. Existing methods that aim to mitigate this issue either become ineffective when fine-tuning deeper layers of the language decoder or scale poorly with increasing model size. To address these limitations, we propose Model-Dowser, a novel sparse fine-tuning approach for MLLMs. Model-Dowser computes a principled importance score for each model parameter with respect to pretrained generalization (prior to downstream adaptation) by jointly considering weight magnitudes, input activations, and output sensitivities. During fine-tuning, Model-Dowser selectively preserves high-importance parameters and updates the remainder. Comprehensive experiments on two representative MLLMs, LLaVA and NVILA, demonstrate that Model-Dowser effectively mitigates catastrophic forgetting and consistently outperforms prior methods, while remaining resource-efficient and scalable to multi-billion-parameter models.
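The scoring-and-freezing idea described above can be sketched in a few lines. The snippet below is a minimal NumPy illustration, not the paper's actual formulation: it assumes the importance of each weight in a linear layer is the product of its magnitude, an input-activation magnitude, and an output-sensitivity term, and it freezes the top-scoring fraction by masking gradient updates. All function names, the multiplicative score form, and the `keep_frac` parameter are illustrative assumptions.

```python
import numpy as np

def importance_scores(W, act_norm, out_sens):
    """Per-weight importance for a linear layer y = W x (illustrative).
    W: (out, in) weights; act_norm: (in,) input-activation magnitudes;
    out_sens: (out,) output sensitivities (e.g. from a data-free probe)."""
    return np.abs(W) * act_norm[None, :] * out_sens[:, None]

def freeze_mask(scores, keep_frac=0.2):
    """Boolean mask: True marks the top keep_frac of weights by score,
    which are frozen (kept at pretrained values) during fine-tuning."""
    k = int(keep_frac * scores.size)
    thresh = np.partition(scores.ravel(), -k)[-k]
    return scores >= thresh

def masked_update(W, grad, mask, lr=1e-2):
    """Gradient step applied only to the non-frozen (low-importance) weights."""
    return W - lr * grad * (~mask)

# Toy example: freeze the top half of weights, then take one update step.
W = np.array([[1.0, 0.1],
              [0.2, 2.0]])
scores = importance_scores(W, np.ones(2), np.ones(2))
mask = freeze_mask(scores, keep_frac=0.5)   # freezes the 2.0 and 1.0 entries
W_new = masked_update(W, np.ones_like(W), mask, lr=0.1)
```

In a real training loop the same mask would be applied to every gradient step (or the frozen parameters excluded from the optimizer), so high-importance weights retain their pretrained values throughout adaptation.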