Do We Really Need Curated Malicious Data for Safety Alignment in Multi-modal Large Language Models?

📅 2025-04-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the insufficient safety alignment of multi-modal large language models (MLLMs) under vision-domain attacks (e.g., typographic manipulation), showing that the root cause lies in training-data distribution bias rather than an inherent susceptibility to malicious samples. We propose a lightweight, attack-agnostic alignment method that requires no adversarial data: fine-tuning on only ~10% of benign instruction data, with responses replaced by explicit refusals, corrects the distributional bias. Through instruction tuning, bias analysis, and controlled ablation studies, we empirically demonstrate that safety alignment capability is not lost but merely masked during standard training. Experiments show that, without any malicious examples, our method achieves safety improvements comparable to those obtained with the full adversarial dataset, reducing attack success rates by over 60%. This substantially reduces data dependency and lowers the cost of building safety-aligned MLLMs.

📝 Abstract
Multi-modal large language models (MLLMs) have made significant progress, yet their safety alignment remains limited. Typically, current open-source MLLMs rely on the alignment inherited from their language module to avoid harmful generations. However, the lack of safety measures specifically designed for multi-modal inputs creates an alignment gap, leaving MLLMs vulnerable to vision-domain attacks such as typographic manipulation. Current methods utilize a carefully designed safety dataset to enhance model defense capability, while the specific knowledge or patterns acquired from the high-quality dataset remain unclear. Through comparison experiments, we find that the alignment gap primarily arises from data distribution biases, while image content, response quality, or the contrastive behavior of the dataset contributes little to boosting multi-modal safety. To investigate this further and identify the key factors in improving MLLM safety, we propose fine-tuning MLLMs on a small set of benign instruction-following data with responses replaced by simple, clear rejection sentences. Experiments show that, without the labor-intensive collection of high-quality malicious data, model safety can still be significantly improved, as long as a specific fraction of rejection data exists in the fine-tuning set, indicating that the safety alignment is not lost but rather obscured during multi-modal pretraining or instruction fine-tuning. Simply correcting the underlying data bias could narrow the safety gap in the vision domain.
Problem

Research questions and friction points this paper is trying to address.

Addressing safety alignment gaps in multi-modal large language models
Investigating data distribution biases in multi-modal safety measures
Improving model safety without curated malicious datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuning MLLMs on benign instruction-following data
Replacing a fraction of responses with simple, clear rejection sentences
Correcting data bias to improve safety alignment
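The data recipe behind these contributions can be sketched in a few lines. This is a hypothetical illustration, not the authors' released code: the field names, the rejection template, and the `build_safety_finetune_set` helper are all assumptions; the paper's core idea is simply that a fixed fraction of a benign instruction-following set has its responses swapped for an explicit refusal before fine-tuning.

```python
import random

# Illustrative rejection template; the paper uses "simple, clear
# rejection sentences" without specifying exact wording.
REJECTION = "I'm sorry, but I can't help with that request."

def build_safety_finetune_set(benign_samples, rejection_fraction=0.1, seed=0):
    """Copy a benign instruction-following dataset and replace the
    responses of ~rejection_fraction of samples with a refusal."""
    rng = random.Random(seed)
    data = [dict(s) for s in benign_samples]  # shallow copies, originals untouched
    k = int(len(data) * rejection_fraction)
    for idx in rng.sample(range(len(data)), k):
        data[idx]["response"] = REJECTION
    return data

# Usage with a toy benign set of instruction/response pairs.
benign = [{"instruction": f"Describe image {i}.", "response": f"Answer {i}."}
          for i in range(100)]
mixed = build_safety_finetune_set(benign, rejection_fraction=0.1)
print(sum(s["response"] == REJECTION for s in mixed))  # 10 of 100 become refusals
```

The resulting `mixed` set would then be used for ordinary instruction tuning; no malicious prompts or curated adversarial images are involved at any step.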
Yanbo Wang
School of Artificial Intelligence, University of Chinese Academy of Sciences; NLPR & MAIS, Institute of Automation, Chinese Academy of Sciences
Jiyang Guan
Institute of Automation, Chinese Academy of Sciences
AI Safety
Jian Liang
Kuaishou Inc.
transfer learning, graph learning
Ran He
School of Artificial Intelligence, University of Chinese Academy of Sciences; NLPR & MAIS, Institute of Automation, Chinese Academy of Sciences