🤖 AI Summary
Existing safety calibration studies for vision-language models (VLMs) predominantly address “under-safety” (responding to hazardous queries) while neglecting “over-safety” (rejecting benign queries), lacking systematic evaluation of bidirectional safety misalignment.
Method: We propose a unified safety calibration framework and introduce VSCBench—the first fine-grained benchmark comprising 3,600 image-text pairs with high visual/semantic similarity but divergent safety labels, enabling dual-perspective (image- and text-centered) evaluation. Safety-sensitive samples are generated via adversarial semantic perturbations and visual style transfer. We conduct zero-shot and fine-tuned evaluations across multiple models to quantify calibration efficacy and utility trade-offs.
Contribution/Results: Evaluating 11 state-of-the-art VLMs reveals pervasive bidirectional safety misalignment. All four representative calibration strategies incur substantial utility degradation, exposing a fundamental safety–utility trade-off bottleneck. VSCBench establishes the first standardized, reproducible platform for rigorous safety calibration assessment.
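The summary above describes measuring misalignment in both directions. A minimal sketch of how such dual rates could be computed is shown below; the function names and the keyword-based refusal detector are illustrative assumptions, not VSCBench's actual implementation.

```python
# Hypothetical sketch of dual-perspective safety-calibration metrics:
# under-safety = fraction of unsafe queries the model answers,
# over-safety  = fraction of safe queries the model refuses.
# The keyword heuristic below is a simplifying assumption; a real
# evaluation would likely use a stronger refusal classifier.

REFUSAL_MARKERS = ("i cannot", "i can't", "i'm sorry", "i am unable")

def is_refusal(response: str) -> bool:
    """Crude keyword heuristic for detecting a refusal."""
    lower = response.lower()
    return any(marker in lower for marker in REFUSAL_MARKERS)

def calibration_rates(samples):
    """samples: list of (model_response, query_is_safe) pairs."""
    unsafe = [r for r, safe in samples if not safe]
    safe = [r for r, safe in samples if safe]
    under_safety = sum(not is_refusal(r) for r in unsafe) / max(len(unsafe), 1)
    over_safety = sum(is_refusal(r) for r in safe) / max(len(safe), 1)
    return under_safety, over_safety

samples = [
    ("Here is how to do it...", False),    # unsafe query, answered -> under-safe
    ("I cannot help with that.", True),    # safe query, refused -> over-safe
    ("Sure, the capital is Paris.", True), # safe query, answered -> calibrated
]
print(calibration_rates(samples))  # -> (1.0, 0.5)
```

A well-calibrated model would drive both rates toward zero, which is exactly the trade-off the benchmark probes: interventions that lower under-safety often raise over-safety (and degrade utility).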
📝 Abstract
The rapid advancement of vision-language models (VLMs) has drawn significant attention to their safety alignment. However, existing methods have primarily focused on model undersafety, where the model responds to hazardous queries, while neglecting oversafety, where the model refuses to answer safe queries. In this paper, we introduce the concept of *safety calibration*, which systematically addresses both undersafety and oversafety. Specifically, we present **VSCBench**, a novel dataset of 3,600 image-text pairs that are visually or textually similar but differ in safety, designed to evaluate safety calibration in both image-centric and text-centric scenarios. Based on this benchmark, we evaluate safety calibration across eleven widely used VLMs. Our extensive experiments reveal significant issues with both undersafety and oversafety. We further investigate four approaches to improve safety calibration and find that, although some methods effectively calibrate the models, they also degrade model utility. This trade-off underscores the urgent need for advanced calibration methods, and our benchmark provides a valuable tool for evaluating future approaches. Our code and data are available at https://github.com/jiahuigeng/VSCBench.git.