🤖 AI Summary
This paper identifies an attribute misbinding vulnerability in identity-preserving generative models, arising from internal attention biases that erroneously associate sensitive (e.g., NSFW) attributes with target identities. We introduce the Attribute Misbinding Attack (AMA), wherein adversaries craft benign-looking prompts to bypass textual safety filters and trigger unintended attribute binding. To this end, we propose the first formal AMA framework; construct the Misbinding Prompt Benchmark—covering four critical risk dimensions; and design the Attribute Binding Safety Score (ABSS), a novel metric jointly quantifying content safety and identity fidelity. Experiments demonstrate that our attack achieves an average 5.28% higher filter evasion rate across five state-of-the-art text moderation systems—including GPT-4o—and induces the highest NSFW generation rate. ABSS is empirically validated to effectively balance safety and identity consistency.
📝 Abstract
Identity-preserving models have led to notable progress in generating personalized content. Unfortunately, such models also exacerbate risks when misused, for instance, by generating threatening content targeting specific individuals. This paper introduces the **Attribute Misbinding Attack**, a novel method that threatens identity-preserving models by inducing them to produce Not-Safe-For-Work (NSFW) content. The attack's core idea is to craft benign-looking textual prompts that circumvent text-filter safeguards and exploit a key model vulnerability: flawed attribute binding stemming from the model's internal attention bias. As a result, harmful descriptions are misattributed to a target identity, yielding NSFW outputs. To facilitate the study of this attack, we present the **Misbinding Prompt** evaluation set, which examines the content generation risks of current state-of-the-art identity-preserving models across four risk dimensions: pornography, violence, discrimination, and illegality. Additionally, we introduce the **Attribute Binding Safety Score (ABSS)**, a metric for concurrently assessing both content fidelity and safety compliance. Experimental results show that our Misbinding Prompt evaluation set achieves a **5.28%** higher success rate in bypassing five leading text filters (including GPT-4o) compared with existing mainstream evaluation sets, while also yielding the highest proportion of NSFW content generation. The proposed ABSS metric thus enables a more comprehensive evaluation of identity-preserving models, jointly accounting for content fidelity and safety compliance.
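The abstract says ABSS concurrently assesses content fidelity and safety compliance but does not give the formula. As a minimal illustrative sketch only — the function name, the assumption that both sub-scores are normalized to [0, 1], and the harmonic-mean combination rule are all hypothetical, not the paper's actual definition — one way such a joint metric could behave:

```python
def abss_sketch(safety: float, identity: float) -> float:
    """Hypothetical joint score combining a content-safety score and an
    identity-fidelity score, both assumed to lie in [0, 1].

    A harmonic mean penalizes imbalance: a model that is safe but loses the
    target identity (or preserves identity while producing unsafe content)
    scores low, matching the paper's stated goal of balancing both aspects.
    """
    if safety <= 0.0 or identity <= 0.0:
        return 0.0
    return 2.0 * safety * identity / (safety + identity)
```

For example, `abss_sketch(0.9, 0.9)` stays high, while `abss_sketch(0.9, 0.1)` drops below 0.2 even though the average of the two sub-scores is 0.5.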