Two Birds, One Projection: Harmonizing Safety and Utility in LVLMs via Inference-time Feature Projection

📅 2026-03-16
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the prevailing trade-off between safety and general visual reasoning capability in large vision-language models under jailbreak attacks. Challenging the assumption that these objectives are inherently conflicting, the study introduces an efficient inference-time defense mechanism that enhances both robustness and performance. By identifying modality-induced bias directions in cross-modal features, the method projects representations into the null space of these biases. Requiring only a single forward pass, this approach simultaneously improves resilience against jailbreak attacks and boosts general visual understanding, demonstrably surpassing the conventional safety–utility trade-off across multiple benchmarks.

๐Ÿ“ Abstract
Existing jailbreak defence frameworks for Large Vision-Language Models often suffer from a safety–utility trade-off, where strengthening safety inadvertently degrades performance on general visually grounded reasoning tasks. In this work, we investigate whether safety and utility are inherently antagonistic objectives. We focus on a modality-induced bias direction consistently observed across datasets, which arises from suboptimal coupling between the Large Language Model backbone and the visual encoder. We further demonstrate that this direction undermines performance on both tasks. Leveraging this insight, we propose Two Birds, One Projection, an efficient inference-time jailbreak defence that projects cross-modal features onto the null space of the identified bias direction, removing the corresponding components. Requiring only a single forward pass, our method effectively breaks the conventional trade-off, simultaneously improving both safety and utility across diverse benchmarks.
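The core operation described in the abstract, projecting cross-modal features onto the null space of a bias direction, can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' implementation: the function name and the way the bias direction is obtained here (a random toy vector) are assumptions; in the paper, the direction is estimated from modality-induced bias observed across datasets.

```python
import numpy as np

def nullspace_project(features: np.ndarray, bias_dir: np.ndarray) -> np.ndarray:
    """Remove the component along `bias_dir` from each feature vector.

    features: (n_tokens, d) cross-modal hidden states.
    bias_dir: (d,) estimated modality-induced bias direction.
    """
    v = bias_dir / np.linalg.norm(bias_dir)  # unit bias direction
    # Null-space projection: H' = H - (H v) v^T, so H' v = 0
    return features - np.outer(features @ v, v)

# Toy example: features with a deliberate component along the bias direction
rng = np.random.default_rng(0)
d = 8
v = rng.normal(size=d)                      # hypothetical bias direction
H = rng.normal(size=(5, d)) + 2.0 * v       # biased cross-modal features
H_clean = nullspace_project(H, v)
# After projection, every feature is orthogonal to the bias direction
print(np.allclose(H_clean @ v, 0.0))        # True
```

Because this is a single rank-one projection applied to hidden states at inference time, it adds essentially no cost beyond the model's usual forward pass, which is consistent with the paper's single-forward-pass claim.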
Problem

Research questions and friction points this paper is trying to address.

safety-utility tradeoff
jailbreak defence
Large Vision-Language Models
visual-grounded reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

inference-time projection
safety-utility tradeoff
modality-induced bias
jailbreak defence
null space projection
Yewon Han
Graduate School of Data Science, Seoul National University
Yumin Seol
Graduate School of Data Science, Seoul National University
EunGyung Kong
Mobilint
Minsoo Jo
Graduate School of Data Science, Seoul National University
Taesup Kim
Assistant Professor, Seoul National University
Representation Learning · Transfer Learning · AI · Machine Learning · Deep Learning