🤖 AI Summary
Traditional image safety models rely on feature learning without semantic reasoning, resulting in low accuracy for harmful content detection, poor adaptability to emerging threats, and frequent retraining requirements. This paper introduces the first semantic-aware image safety system supporting dynamic policy alignment. Our approach addresses these limitations through three key contributions: (1) constructing VisionHarm, a fine-grained, multi-category dataset of harmful images; (2) proposing a policy-following training framework that enables real-time adaptation to new safety policies without retraining, integrating a customized loss function and diverse question-answering generation; and (3) enhancing model interpretability and attribution capability via efficient synthetic data augmentation. On the VisionHarm-T and VisionHarm-C benchmarks, our method achieves accuracy improvements of +8.6% and +15.5% over GPT-4o, respectively, while accelerating inference by over 16×, achieving state-of-the-art performance with efficient deployment.
📄 Abstract
With the rapid proliferation of digital media, the need for efficient and transparent safeguards against unsafe content is more critical than ever. Traditional image guardrail models, constrained by predefined categories, often misclassify content because they rely on pure feature-based learning without semantic reasoning. Moreover, these models struggle to adapt to emerging threats and require costly retraining whenever new threats arise. To address these limitations, we introduce SafeVision, a novel image guardrail that integrates human-like reasoning to enhance adaptability and transparency. Our approach incorporates an effective data collection and generation framework, a policy-following training pipeline, and a customized loss function. We also propose a diverse QA generation and training strategy to enhance learning effectiveness. SafeVision dynamically aligns with evolving safety policies at inference time, eliminating the need for retraining while ensuring precise risk assessments and explanations. Recognizing the limitations of existing unsafe-image benchmarks, which either lack granularity or cover only limited risks, we introduce VisionHarm, a high-quality dataset comprising two subsets: VisionHarm Third-party (VisionHarm-T) and VisionHarm Comprehensive (VisionHarm-C), spanning diverse harmful categories. Through extensive experiments, we show that SafeVision achieves state-of-the-art performance across benchmarks, outperforming GPT-4o by 8.6% on VisionHarm-T and by 15.5% on VisionHarm-C while being over 16× faster. SafeVision thus establishes a comprehensive, policy-following, and explainable image guardrail with dynamic adaptation to emerging threats.