Enhancing Vision-Language Model Safety through Progressive Concept-Bottleneck-Driven Alignment

📅 2024-11-18
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Vision-language models (VLMs) suffer from vulnerability to adversarial visual perturbations and weak safety alignment, compromising their reliability in security-critical applications. Method: We propose PSA-VLM, a progressive concept-bottleneck-driven safety alignment framework. It integrates a lightweight visual safety module as an interpretable concept bottleneck layer and introduces a novel two-stage progressive alignment mechanism: Stage I establishes foundational safety semantic alignment, while Stage II enforces fine-grained vision–language joint constraints. Contribution/Results: PSA-VLM significantly enhances robustness against malicious images and improves safety controllability—without degrading general-purpose performance. It achieves state-of-the-art results on mainstream VLM safety benchmarks; notable improvements emerge already in Stage I, and the full method delivers strong defense capability with minimal computational overhead.

📝 Abstract
Benefiting from the powerful capabilities of Large Language Models (LLMs), pre-trained visual encoders connected to LLMs form Vision-Language Models (VLMs). However, recent research shows that the visual modality in VLMs is highly vulnerable: attackers can bypass the safety alignment of the underlying LLM through visually transmitted content and launch harmful attacks. To address this challenge, we propose a progressive concept-based alignment strategy, PSA-VLM, which incorporates safety modules as concept bottlenecks to strengthen safety alignment of the visual modality. By aligning model predictions with specific safety concepts, we improve defenses against risky images and enhance explainability and controllability while minimally impacting general performance. Our method is trained in two stages: the first stage delivers substantial safety gains at low computational cost, and fine-tuning the language model in the second stage further improves safety performance. Our method achieves state-of-the-art results on popular VLM safety benchmarks.
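The abstract describes routing visual features through an interpretable concept bottleneck before they reach the language model, so that safety decisions are mediated by human-readable concept scores. As a rough illustrative sketch only (not the paper's implementation; the dimensions, concept names, linear-probe form, and threshold below are all assumptions), such a bottleneck might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: visual encoder output and number of safety concepts.
D_VISUAL = 16    # visual feature dimension (illustrative)
N_CONCEPTS = 4   # e.g. violence, self-harm, hate, explicit content (illustrative)

# Lightweight safety head: a linear probe from visual features to concept logits.
W_concept = rng.normal(scale=0.1, size=(D_VISUAL, N_CONCEPTS))
b_concept = np.zeros(N_CONCEPTS)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def safety_bottleneck(visual_feat, threshold=0.5):
    """Map visual features to per-concept scores, then to a binary 'unsafe' flag.

    Because every intermediate value is a named concept score, the decision
    is inspectable and individually controllable (e.g. per-concept thresholds),
    which is the point of a concept bottleneck.
    """
    concept_scores = sigmoid(visual_feat @ W_concept + b_concept)
    unsafe = bool(np.any(concept_scores > threshold))
    return concept_scores, unsafe

feat = rng.normal(size=D_VISUAL)
scores, flagged = safety_bottleneck(feat)
```

In the paper's two-stage scheme, a module like this would be trained first (Stage I, cheap and already effective per the summary), with the language model fine-tuned against the same concepts in Stage II.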
Problem

Research questions and friction points this paper is trying to address.

Vision-Language Model Safety
Malicious Input Defense
Model Reliability Enhancement
Innovation

Methods, ideas, or system contributions that make the work stand out.

PSA-VLM
Vision-Language Model Safety
Malicious Image Defense
👥 Authors
Zhendong Liu
Nanjing University (Trustworthy AI, Explainable AI, Safety)
Yuanbi Nie
School of Electrical Engineering, Chongqing University, Chongqing, China
Yingshui Tan
Alibaba Group, Hangzhou, Zhejiang Province, China
Xiangyu Yue
The Chinese University of Hong Kong / UC Berkeley / Stanford University / NJU (Artificial Intelligence, Computer Vision, Multi-modal Learning)
Qiushi Cui
School of Electrical Engineering, Chongqing University, Chongqing, China
Chong-Jun Wang
Department of Computer Science and Technology, Nanjing University, Nanjing, Jiangsu Province, China
Xiaoyong Zhu
Jiangsu University (Electrical Machines, Electric Vehicles)
Bo Zheng
Alibaba Group, Hangzhou, Zhejiang Province, China