🤖 AI Summary
Safety alignment in text-to-image (T2I) models often induces over-refusal, i.e., the erroneous rejection of harmless prompts, undermining practical utility. Prior work lacks a large-scale, systematic benchmark for evaluating this phenomenon. Method: We introduce OVERT, the first dedicated benchmark comprising 4,600 semantically benign yet superficially suspicious prompts and 1,785 genuinely harmful ones, accompanied by a standardized evaluation framework. Contributions/Results: (1) A novel synthetic data generation method constrained by both semantic fidelity and safety policy adherence; (2) configurable prompt engineering enabling user-defined safety policies; (3) empirical evidence demonstrating the pervasive nature of over-refusal across multiple safety dimensions, and a quantification of the substantial impact of prompt rewriting on semantic fidelity. Extensive evaluation across mainstream T2I models validates the framework's effectiveness and generalizability.
📝 Abstract
Text-to-Image (T2I) models have achieved remarkable success in generating visual content from text inputs. Although multiple safety alignment strategies have been proposed to prevent harmful outputs, they often lead to overly cautious behavior -- rejecting even benign prompts -- a phenomenon known as *over-refusal* that reduces the practical utility of T2I models. While over-refusal has been observed in practice, there is no large-scale benchmark that systematically evaluates this phenomenon for T2I models. In this paper, we present an automatic workflow for constructing synthetic evaluation data, resulting in OVERT (**OVE**r-**R**efusal evaluation on **T**ext-to-image models), the first large-scale benchmark for assessing over-refusal behaviors in T2I models. OVERT includes 4,600 seemingly harmful but benign prompts across nine safety-related categories, along with 1,785 genuinely harmful prompts (OVERT-unsafe) for evaluating the safety-utility trade-off. Using OVERT, we evaluate several leading T2I models and find that over-refusal is a widespread issue across various categories (Figure 1), underscoring the need for further research to enhance the safety alignment of T2I models without compromising their functionality. As a preliminary attempt to reduce over-refusal, we explore prompt rewriting; however, we find it often compromises faithfulness to the meaning of the original prompts. Finally, we demonstrate the flexibility of our generation framework in accommodating diverse safety requirements by generating customized evaluation data adapted to user-defined policies.
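The safety-utility trade-off the abstract describes can be sketched as two complementary metrics: the rate at which a model refuses benign prompts (over-refusal) and the rate at which harmful prompts slip through. The sketch below is purely illustrative and assumes a hypothetical `keyword_filter` stand-in for a model's safety check; the paper's actual evaluation harness and prompt data are not specified here.

```python
# Illustrative sketch of the safety-utility trade-off evaluation.
# The function and variable names (keyword_filter, benign, harmful)
# are assumptions for demonstration, not OVERT's actual API.

def over_refusal_rate(benign_prompts, refuses):
    """Fraction of benign prompts the model wrongly refuses."""
    refused = sum(1 for p in benign_prompts if refuses(p))
    return refused / len(benign_prompts)

def unsafe_pass_rate(harmful_prompts, refuses):
    """Fraction of harmful prompts the model fails to refuse."""
    passed = sum(1 for p in harmful_prompts if not refuses(p))
    return passed / len(harmful_prompts)

def keyword_filter(prompt):
    # Toy stand-in for a safety filter: refuses any prompt containing
    # a flagged keyword, which over-refuses superficially suspicious
    # but benign prompts.
    return any(w in prompt.lower() for w in ("blood", "weapon"))

benign = [
    "a blood orange sliced on a cutting board",       # benign, refused
    "a museum display of medieval antique weapons",   # benign, refused
    "a child flying a kite in a park",                # benign, allowed
]
harmful = ["graphic depiction of real violence"]      # harmful, allowed

print(over_refusal_rate(benign, keyword_filter))  # 2 of 3 benign prompts refused
print(unsafe_pass_rate(harmful, keyword_filter))  # the harmful prompt slips through
```

A naive keyword filter maximizes neither metric: it rejects "seemingly harmful but benign" prompts while missing genuinely harmful ones phrased without flagged words, which is exactly the gap a benchmark like OVERT is designed to measure.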