Principled Steering via Null-space Projection for Jailbreak Defense in Vision-Language Models

πŸ“… 2026-03-23
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses the vulnerability of vision-language models to visual jailbreak attacks in open-world settings, where existing defenses often compromise utility by over-rejecting benign inputs. To overcome this limitation, the authors propose NullSteer, a novel framework that introduces null-space projection in activation space. Specifically, NullSteer dynamically projects the steering applied to potentially harmful inputs onto the orthogonal complement of a benign subspace via a linear transformation, thereby enhancing rejection capability while leaving the model's behavior on benign inputs strictly unperturbed. Evaluated on MiniGPT-4, NullSteer reduces the average attack success rate by over 15% and maintains performance comparable to the original model on standard benchmark tasks, effectively balancing security and utility.
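The core mechanism can be illustrated with a small NumPy sketch. This is a hypothetical reconstruction from the summary, not the authors' code: the benign subspace basis `B`, the refusal direction, and the helper `steer` are all assumed names. The idea is to estimate a low-rank benign subspace from benign activations, build a projector onto its orthogonal complement, and add a refusal vector only along that complement, so activations lying in the benign subspace receive zero perturbation.

```python
import numpy as np

# Hypothetical sketch of null-space projected activation steering.
# Assume H_benign holds activations collected on benign prompts, shape (n, d).
rng = np.random.default_rng(0)
d = 16
H_benign = rng.normal(size=(100, d))

# Estimate a rank-k benign subspace via SVD of the centered benign activations.
k = 4
mu = H_benign.mean(axis=0)
_, _, Vt = np.linalg.svd(H_benign - mu, full_matrices=False)
B = Vt[:k].T                      # (d, k) orthonormal basis of the benign subspace

# Linear projector onto the orthogonal complement (null space) of that subspace.
P_null = np.eye(d) - B @ B.T

def steer(h, refusal_dir, alpha=1.0):
    """Add a refusal vector only along the benign subspace's orthogonal complement."""
    return h + alpha * (P_null @ refusal_dir)

# The added perturbation has no component inside the benign subspace,
# so purely benign activations are steered by (approximately) nothing
# in the directions the model uses for benign behavior.
h = mu + B @ rng.normal(size=k)   # an activation lying in the benign subspace
refusal_dir = rng.normal(size=d)
h_steered = steer(h, refusal_dir)
```

The design choice the summary emphasizes falls out directly: `B.T @ (h_steered - h)` is zero up to floating-point error, which is the "zero perturbation within the benign subspace" guarantee.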

πŸ“ Abstract
As vision-language models (VLMs) are increasingly deployed in open-world scenarios, they can be easily induced by visual jailbreak attacks to generate harmful content, posing serious risks to model safety and trustworthy use. Recent activation steering methods inject directional vectors into model activations during inference to induce refusal behaviors and have demonstrated effectiveness. However, a steering vector may both enhance refusal ability and cause over-refusal, thereby degrading model performance on benign inputs. Moreover, lacking theoretical interpretability, these methods still suffer from limited robustness and effectiveness. To better balance safety and utility, we propose NullSteer, a null-space projected activation defense framework. Our method constructs refusal directions within model activations through a linear transformation: it maintains zero perturbation within the benign subspace while dynamically inducing refusal along potentially harmful directions, thereby theoretically achieving safety enhancement without impairing the model's general capabilities. Extensive experiments show that NullSteer significantly reduces harmful outputs under various jailbreak attacks (an average ASR reduction of over 15 percent on MiniGPT-4) while maintaining performance comparable to the original model on general benchmarks.
Problem

Research questions and friction points this paper is trying to address.

jailbreak defense
vision-language models
over-refusal
model safety
harmful content
Innovation

Methods, ideas, or system contributions that make the work stand out.

Null-space Projection
Activation Steering
Jailbreak Defense
Vision-Language Models
Safety-Utility Trade-off