Implicit Bias Injection Attacks against Text-to-Image Diffusion Models

📅 2025-04-02
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This work identifies a class of implicit biases in text-to-image diffusion models that are semantically pervasive yet visually featureless, characterized by stealthy manifestation, scene adaptivity, and susceptibility to misuse. To expose this threat, we propose IBI-Attacks, a zero-shot, plug-and-play framework that, for the first time, models a universal implicit bias direction directly in the prompt embedding space. Adaptive vector projection lets the attack inject bias across diverse semantic scenes while requiring neither model fine-tuning nor modification of user prompts, preserving high stealthiness, strong cross-domain transferability, and semantic fidelity. Evaluated on mainstream models including Stable Diffusion, IBI-Attacks achieves high attack success rates and generalizes robustly across models. The approach offers a new lens for detecting, assessing, and mitigating implicit biases in generative AI systems.
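The core mechanism described above (a precomputed bias direction in the prompt embedding space, injected with an adaptive, projection-based strength) can be sketched in a few lines of PyTorch. This is a minimal illustration under our own assumptions, not the authors' implementation: the function names, the mean-difference estimator, and the tanh-based adaptive scaling are hypothetical stand-ins, and the actual IBI-Attacks code lives in the linked repository.

```python
import torch

def compute_bias_direction(neutral_embs: torch.Tensor,
                           biased_embs: torch.Tensor) -> torch.Tensor:
    """Estimate a general bias direction as the mean difference between
    biased and neutral prompt embeddings.
    Inputs have shape (pairs, seq, dim); returns a unit vector (dim,)."""
    delta = (biased_embs - neutral_embs).mean(dim=(0, 1))
    return delta / delta.norm()

def inject_bias(prompt_emb: torch.Tensor,
                direction: torch.Tensor,
                strength: float = 1.0) -> torch.Tensor:
    """Shift each token embedding along `direction`, scaled down for tokens
    that already project strongly onto it (an assumed form of the paper's
    adaptive adjustment), so the prompt's original semantics are preserved.
    prompt_emb: (seq, dim); direction: (dim,)."""
    proj = prompt_emb @ direction                        # (seq,) per-token projection
    scale = strength * (1.0 - torch.tanh(proj.abs()))    # adaptive per-token strength
    return prompt_emb + scale.unsqueeze(-1) * direction
```

Under this reading, the direction is estimated once offline, so injection at inference time costs only a single projection and addition per prompt, consistent with the zero-shot, plug-and-play claim.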

📝 Abstract
The proliferation of text-to-image diffusion models (T2I DMs) has led to an increased presence of AI-generated images in daily life. However, biased T2I models can generate content with specific tendencies, potentially influencing people's perceptions. Intentional exploitation of these biases risks conveying misleading information to the public. Current research on bias primarily addresses explicit biases with recognizable visual patterns, such as skin color and gender. This paper introduces a novel form of implicit bias that lacks explicit visual features but can manifest in diverse ways across various semantic contexts. This subtle and versatile nature makes this bias challenging to detect, easy to propagate, and adaptable to a wide range of scenarios. We further propose an implicit bias injection attack framework (IBI-Attacks) against T2I diffusion models by precomputing a general bias direction in the prompt embedding space and adaptively adjusting it based on different inputs. Our attack module can be seamlessly integrated into pre-trained diffusion models in a plug-and-play manner without direct manipulation of user input or model retraining. Extensive experiments validate the effectiveness of our scheme in introducing bias through subtle and diverse modifications while preserving the original semantics. The strong concealment and transferability of our attack across various scenarios further underscore the significance of our approach. Code is available at https://github.com/Hannah1102/IBI-attacks.
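To make the abstract's "plug-and-play" integration concrete, here is a hedged sketch of how such an attack module could sit in front of a pre-trained Stable Diffusion pipeline using Hugging Face diffusers: the user's prompt is encoded as usual, the embedding is shifted with `inject_bias` from the sketch above, and the result is passed through the pipeline's `prompt_embeds` argument, touching neither the prompt text nor the model weights. The model id and this wiring are our illustrative assumptions; the authors' actual integration may differ.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a doctor talking to a patient in a clinic"
tokens = pipe.tokenizer(
    prompt, padding="max_length",
    max_length=pipe.tokenizer.model_max_length,
    truncation=True, return_tensors="pt",
).to("cuda")

with torch.no_grad():
    emb = pipe.text_encoder(tokens.input_ids)[0]   # (1, 77, 768)

# Placeholder: in practice the unit bias direction would be precomputed
# offline, e.g. with compute_bias_direction from the sketch above.
direction = torch.randn(768)
direction = (direction / direction.norm()).to(emb)

biased_emb = inject_bias(emb[0], direction).unsqueeze(0)
image = pipe(prompt_embeds=biased_emb).images[0]   # generation proceeds as normal
```

Because the hook lives entirely in the embedding space, the same module transfers to any pipeline that accepts precomputed prompt embeddings, which mirrors the cross-scenario transferability the paper reports.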
Problem

Research questions and friction points this paper is trying to address.

Detecting subtle implicit biases in text-to-image models
Preventing misleading information from biased AI-generated images
Attacking models via adaptable implicit bias injection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Implicit bias injection in diffusion models
Plug-and-play attack without retraining
Adaptive bias direction in embedding space
Huayang Huang
School of Computer Science, Wuhan University
Xiangye Jin
School of Mathematics and Statistics, Wuhan University
Jiaxu Miao
Sun Yat-Sen University
Deep Learning · Video Segmentation · Federated Learning
Yu Wu
University of Cambridge
machine learning · health sensing · mobile health