SaFeR-CLIP: Mitigating NSFW Content in Vision-Language Models While Preserving Pre-Trained Knowledge

📅 2025-11-20
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the trade-off between safety enhancement and generalization degradation in secure fine-tuning of vision-language models (e.g., CLIP). We propose a proximity-aware fine-tuning strategy that redirects unsafe concepts in the representation space toward their semantically nearest safe substitutes, thereby minimizing perturbation to the pretrained geometric structure. Built upon the CLIP framework, our method integrates semantic-distance-aware redirection, zero-shot classification evaluation, and adversarial fine-tuning to achieve safe alignment with minimal intervention. Our contributions are threefold: (1) a representation-preserving paradigm for safe fine-tuning; (2) NSFW-Caps, the first safety evaluation benchmark explicitly designed for distribution shift; and (3) state-of-the-art safety performance with up to 8.0% recovery in zero-shot accuracy, significantly outperforming existing approaches and empirically validating the critical role of preserving pretrained semantic geometry.
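As a rough illustration of the proximity-aware redirection described above, the sketch below picks, for a given unsafe text embedding, the cosine-nearest embedding among a set of safe substitutes and scores a simple redirection loss. The function names and the toy 4-d embeddings are hypothetical; this is a sketch of the idea, not the authors' implementation.

```python
import numpy as np

def nearest_safe_target(unsafe_emb, safe_embs):
    """Index of the safe embedding with highest cosine similarity to the unsafe one."""
    u = unsafe_emb / np.linalg.norm(unsafe_emb)
    s = safe_embs / np.linalg.norm(safe_embs, axis=1, keepdims=True)
    return int(np.argmax(s @ u))

def redirection_loss(unsafe_emb, target_emb):
    """1 - cosine similarity: small when the unsafe embedding already sits near its target."""
    u = unsafe_emb / np.linalg.norm(unsafe_emb)
    t = target_emb / np.linalg.norm(target_emb)
    return 1.0 - float(u @ t)

# Toy setup: four orthogonal "safe concept" embeddings in a 4-d space.
safe_embs = np.eye(4)
unsafe_emb = np.array([0.1, 0.9, 0.2, 0.0])  # semantically closest to concept 1

idx = nearest_safe_target(unsafe_emb, safe_embs)     # -> 1
loss = redirection_loss(unsafe_emb, safe_embs[idx])  # small: minimal displacement
```

Because the target is the nearest safe substitute rather than a single predefined one, the redirection loss (and hence the displacement of the pretrained embedding) stays small, which is the "minimal intervention" idea the summary describes.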

πŸ“ Abstract
Improving the safety of vision-language models like CLIP via fine-tuning often comes at a steep price, causing significant drops in their generalization performance. We find this trade-off stems from rigid alignment strategies that force unsafe concepts toward single, predefined safe targets, disrupting the model's learned semantic structure. To address this, we propose a proximity-aware approach: redirecting unsafe concepts to their semantically closest safe alternatives to minimize representational change. We introduce SaFeR-CLIP, a fine-tuning framework that applies this principle of minimal intervention. SaFeR-CLIP successfully reconciles safety and performance, recovering up to 8.0% in zero-shot accuracy over prior methods while maintaining robust safety. To support more rigorous evaluation, we also contribute NSFW-Caps, a new benchmark of 1,000 highly aligned pairs for testing safety under distributional shift. Our work shows that respecting the geometry of pretrained representations is key to achieving safety without sacrificing performance.
Problem

Research questions and friction points this paper is trying to address.

Mitigating NSFW content in vision-language models while preserving pretrained knowledge
Reducing the safety-performance trade-off by redirecting unsafe concepts semantically
Developing a benchmark for evaluating safety under distributional shift
Innovation

Methods, ideas, or system contributions that make the work stand out.

Redirecting unsafe concepts to closest safe alternatives
Fine-tuning framework with minimal intervention principle
Preserving pretrained semantic structure for safety
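The summary above names zero-shot classification evaluation as the yardstick for how much generalization survives safe fine-tuning. A minimal sketch of that protocol, using toy orthogonal embeddings in place of real CLIP image/text features (all names and data here are hypothetical, not the paper's code):

```python
import numpy as np

def zero_shot_predict(image_embs, class_embs):
    """Assign each image the class whose text embedding has highest cosine similarity."""
    img = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    cls = class_embs / np.linalg.norm(class_embs, axis=1, keepdims=True)
    return (img @ cls.T).argmax(axis=1)

# Toy data: three class text embeddings on orthogonal axes,
# and three image embeddings each lying near one class axis.
class_embs = np.eye(3)
image_embs = np.array([
    [0.9, 0.1, 0.0],  # should match class 0
    [0.2, 0.8, 0.1],  # should match class 1
    [0.0, 0.1, 0.9],  # should match class 2
])
labels = np.array([0, 1, 2])

preds = zero_shot_predict(image_embs, class_embs)
accuracy = float((preds == labels).mean())
```

Running this evaluation before and after safety fine-tuning gives the zero-shot accuracy gap that the paper reports recovering (up to 8.0% relative to prior methods).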