🤖 AI Summary
Current safety mechanisms in image generation models are often circumvented and degrade output quality. This work proposes a training-free, post-hoc safety framework that precisely edits unsafe content after generation while leaving the original generator unchanged. The approach leverages Gemini-2.5-Flash as a general-purpose violation detector and introduces a vision-language model-driven spatial gating mechanism to enable instance-consistent, localized semantic editing across multi-concept scenes. Evaluated on a benchmark of 245 images, the method improves CLIP alignment by 0.121 on average, reduces background distortion to an LPIPS score of 0.058, eliminates detections by NudeNet, and lowers human-reviewed violation identification rates from 95.99% to 10.16%.
📄 Abstract
Image-generative models are widely deployed across industries, and recent studies show that they can be exploited to produce policy-violating content. Existing mitigation strategies primarily operate at the pre- or mid-generation stages through techniques such as prompt filtering and safety-aware training or fine-tuning; prior work shows that these approaches can be bypassed and often degrade generative quality. In this work, we propose ReVision, a training-free, prompt-based, post-hoc safety framework for image-generation pipelines. ReVision acts as a last line of defense by analyzing generated images and selectively editing unsafe concepts without altering the underlying generator. It uses the Gemini-2.5-Flash model as a generic policy-violating-concept detector, avoiding reliance on multiple category-specific detectors, and performs localized semantic editing to replace unsafe content. Prior post-hoc editing methods often rely on imprecise spatial localization, which undermines usability and limits deployability, particularly in multi-concept scenes. To address this limitation, ReVision introduces a VLM-assisted spatial gating mechanism that enforces instance-consistent localization, enabling precise edits while preserving scene integrity. We evaluate ReVision on a 245-image benchmark covering both single- and multi-concept scenarios. Results show that ReVision (i) improves CLIP-based alignment toward safe prompts by $+0.121$ on average; (ii) significantly improves multi-concept background fidelity (LPIPS $0.166 \rightarrow 0.058$); (iii) achieves near-complete suppression on category-specific detectors (e.g., NudeNet $70.51 \rightarrow 0$); and (iv) reduces policy-violating content recognizability in a human moderation study from $95.99\%$ to $10.16\%$.
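The detect → gate → edit flow described above can be sketched as a toy pipeline. This is a minimal illustration, not the paper's implementation: the real detector is Gemini-2.5-Flash and the editor performs masked semantic inpainting, whereas here images are plain dictionaries and all function names (`detect_violations`, `spatial_gate`, `revision_pipeline`) are hypothetical stand-ins.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

Box = Tuple[int, int, int, int]  # (x0, y0, x1, y1)

@dataclass
class Violation:
    concept: str        # detected unsafe concept
    bbox: Box           # region proposed by the VLM detector
    replacement: str    # safe concept to edit in

def detect_violations(image: dict) -> List[Violation]:
    # Stand-in for the generic VLM-based detector (Gemini-2.5-Flash in the
    # paper); here we simply read pre-annotated regions from the toy image.
    return [Violation(c, b, r) for (c, b, r) in image.get("unsafe_regions", [])]

def spatial_gate(violations: List[Violation], instances: List[Box]) -> List[Violation]:
    # Toy analogue of VLM-assisted spatial gating: keep only detections that
    # match exactly one known object instance, so an edit stays confined to a
    # single instance instead of bleeding across a multi-concept scene.
    return [v for v in violations
            if sum(1 for ib in instances if v.bbox == ib) == 1]

def edit(image: dict, violations: List[Violation]) -> dict:
    # Stand-in for localized semantic editing: relabel only the gated regions,
    # leaving the rest of the scene (and the original image) untouched.
    edited = dict(image)
    labels: Dict[Box, str] = dict(edited.get("labels", {}))
    for v in violations:
        labels[v.bbox] = v.replacement
    edited["labels"] = labels
    return edited

def revision_pipeline(image: dict) -> dict:
    # Post-hoc: runs entirely on the generated image; the generator is unchanged.
    gated = spatial_gate(detect_violations(image), image.get("instances", []))
    return edit(image, gated)
```

The key design point the sketch preserves is that safety is enforced after generation: the generator is never retrained or re-prompted, and only the gated region is modified.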