Harmonious Parameter Adaptation in Continual Visual Instruction Tuning for Safety-Aligned MLLMs

📅 2025-11-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
In continual visual instruction tuning (CVIT), multimodal large language models (MLLMs) face the dual challenges of task forgetting and safety degradation during safety-aligned adaptation. To address this, we propose the Harmonious Parameter Adaptation (HPA) framework, which dynamically coordinates parameter updates in the post-training phase via three key mechanisms: attention-driven parameter partitioning, a task-safety balanced selection strategy, and orthogonal-constrained differential parameter updates. Unlike existing methods that prioritize either task performance or safety in isolation, HPA jointly optimizes both objectives. Experimental results demonstrate that HPA significantly improves task retention and safety robustness on standard CVIT benchmarks and comprehensive safety evaluation suites. To our knowledge, it is the first approach to achieve synergistic optimization of task performance and safety alignment, effectively mitigating catastrophic forgetting while preventing safety deterioration.

📝 Abstract
While continual visual instruction tuning (CVIT) has shown promise in adapting multimodal large language models (MLLMs), existing studies predominantly focus on models without safety alignment. This critical oversight ignores the fact that real-world MLLMs inherently require such mechanisms to mitigate potential risks. In this work, we shift our focus to CVIT for safety-aligned MLLMs and observe that during continual adaptation, the model not only suffers from task forgetting but also exhibits degradation in its safety. Achieving a harmonious balance between safety and task performance remains a crucial challenge. To address this, we propose Harmonious Parameter Adaptation (HPA), a post-training framework composed of focusing-based parameter partition, harmoniously balanced parameter selection, and orthogonal parameter adjustment. Specifically, HPA partitions parameters into two types based on their focus on safety or task performance, and selects the focused ones to preserve from a balanced perspective. In addition, HPA imposes orthogonality constraints on parameter updates to further alleviate catastrophic forgetting. Extensive experiments on the CVIT benchmark and safety evaluation datasets demonstrate that HPA better maintains high safety and mitigates forgetting than existing baselines.
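The abstract describes the focusing-based partition and balanced selection only at a high level. As a minimal illustrative sketch (not the authors' implementation), one might score each parameter's importance for the task and for safety with a Fisher-style squared-gradient proxy, partition parameters by which objective they "focus" on, and then preserve a balanced top fraction from each side; the function name, the importance proxy, and the `keep_ratio` budget below are all assumptions for illustration:

```python
import numpy as np

def partition_and_select(task_grads, safety_grads, keep_ratio=0.2):
    """Illustrative sketch of focusing-based partition + balanced selection.

    Importance is approximated per parameter by squared gradient magnitude
    (a common Fisher-information proxy). Each parameter is assigned to the
    objective it serves more, then the top `keep_ratio` fraction of each
    side is marked for preservation (frozen during adaptation).
    """
    task_imp = np.asarray(task_grads) ** 2      # task-importance proxy
    safety_imp = np.asarray(safety_grads) ** 2  # safety-importance proxy

    # Focusing-based partition: each parameter goes to the side it serves more.
    task_focused = task_imp >= safety_imp
    safety_focused = ~task_focused

    frozen = np.zeros(task_imp.shape, dtype=bool)
    for mask, imp in ((task_focused, task_imp), (safety_focused, safety_imp)):
        idx = np.flatnonzero(mask)
        if idx.size == 0:
            continue
        k = max(1, int(keep_ratio * idx.size))  # balanced budget per side
        top = idx[np.argsort(imp[idx])[-k:]]    # most-focused parameters
        frozen[top] = True                      # protect these from updates
    return frozen  # boolean mask: True = preserved parameter
```

Selecting a fixed fraction from *each* side, rather than a global top-k, is what keeps the preserved set balanced between safety-focused and task-focused parameters.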
Problem

Research questions and friction points this paper is trying to address.

Addresses safety degradation during continual visual instruction tuning
Balances task performance with safety preservation in MLLMs
Mitigates catastrophic forgetting while maintaining safety alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Focusing-based parameter partition for safety and task
Harmoniously balanced parameter selection from dual perspectives
Orthogonal parameter adjustment to alleviate catastrophic forgetting
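The orthogonality constraint on parameter updates can be pictured with a standard gradient-projection sketch (a generic continual-learning technique, not the paper's exact rule): remove from the new task's gradient any component lying in a subspace spanned by earlier-task directions, so the step minimally disturbs previously learned behaviour. The function name and the use of a precomputed orthonormal basis are assumptions for illustration:

```python
import numpy as np

def orthogonal_update(grad, prev_basis, lr=0.01):
    """Project the gradient onto the orthogonal complement of a protected
    subspace before taking a step.

    prev_basis: (d, k) matrix with orthonormal columns spanning directions
    important to earlier tasks (e.g. from a QR/SVD of past gradients).
    Returns the parameter delta for the current step.
    """
    grad = np.asarray(grad)
    # Subtract the component of the gradient inside the protected subspace.
    projected = grad - prev_basis @ (prev_basis.T @ grad)
    return -lr * projected
```

By construction the returned delta is orthogonal to every protected direction, which is the sense in which the update "does not interfere" with what was learned before.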
👥 Authors
Ziqi Wang (Hefei University of Technology)
Chang Che (Hefei University of Technology)
Qi Wang (Tsinghua University)
Hui Ma (Hefei University of Technology)
Zenglin Shi (Professor of Artificial Intelligence, Hefei University of Technology)
Cees G. M. Snoek (Professor of Computer Science, University of Amsterdam)
Meng Wang (Hefei University of Technology)