PurifyGen: A Risk-Discrimination and Semantic-Purification Model for Safe Text-to-Image Generation

📅 2025-12-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing safety approaches for mitigating unsafe content in diffusion-based text-to-image synthesis, which rely on blacklists or classifiers, are vulnerable to adversarial bypassing and require extensive labeled data and additional training. This paper proposes a training-free, plug-and-play text-prompt purification framework. It performs unsupervised, token-level risk assessment via complementary semantic distance metrics; constructs a dual-concept matrix encoding toxic and benign semantics; and applies null-space projection coupled with range alignment for semantic remapping. Crucially, it enables fine-grained, dynamic embedding replacement while preserving the original model's weights, balancing safety guarantees with semantic fidelity. Evaluated across five benchmark datasets, the method substantially reduces unsafe generation rates, outperforms all training-free baselines, and matches the performance of state-of-the-art methods that require full model retraining.

📝 Abstract
Recent advances in diffusion models have notably enhanced text-to-image (T2I) generation quality, but they also raise the risk of generating unsafe content. Traditional safety methods like text blacklisting or harmful content classification have significant drawbacks: they can be easily circumvented or require extensive datasets and extra training. To overcome these challenges, we introduce PurifyGen, a novel, training-free approach for safe T2I generation that retains the model's original weights. PurifyGen introduces a dual-stage strategy for prompt purification. First, we evaluate the safety of each token in a prompt by computing its complementary semantic distance, which measures the semantic proximity between the prompt tokens and concept embeddings from predefined toxic and clean lists. This enables fine-grained prompt classification without explicit keyword matching or retraining. Tokens closer to toxic concepts are flagged as risky. Second, for risky prompts, we apply a dual-space transformation: we project toxic-aligned embeddings into the null space of the toxic concept matrix, effectively removing harmful semantic components, and simultaneously align them into the range space of clean concepts. This dual alignment purifies risky prompts by both subtracting unsafe semantics and reinforcing safe ones, while retaining the original intent and coherence. We further define a token-wise strategy to selectively replace only risky token embeddings, ensuring minimal disruption to safe content. PurifyGen offers a plug-and-play solution with theoretical grounding and strong generalization to unseen prompts and models. Extensive testing shows that PurifyGen surpasses current methods in reducing unsafe content across five datasets and competes well with training-dependent approaches. Code is available at https://github.com/AI-Researcher-Team/PurifyGen.
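The first stage described in the abstract, flagging tokens by their proximity to toxic versus clean concept embeddings, can be sketched roughly as follows. This is a minimal illustration assuming CLIP-style embedding vectors; the function names, the `margin` threshold, and the choice of cosine similarity with max-pooling over concepts are illustrative assumptions, not the paper's exact complementary semantic distance metric.

```python
import numpy as np

def token_risk_scores(token_emb, toxic_emb, clean_emb):
    """Score each token by how much closer it sits to the nearest
    toxic concept than to the nearest clean concept.

    token_emb: (n_tokens, d) token embeddings
    toxic_emb: (n_toxic, d)  embeddings of predefined toxic concepts
    clean_emb: (n_clean, d)  embeddings of predefined clean concepts
    """
    def cos_sim(a, b):
        a = a / np.linalg.norm(a, axis=-1, keepdims=True)
        b = b / np.linalg.norm(b, axis=-1, keepdims=True)
        return a @ b.T  # (n_tokens, n_concepts)

    tox = cos_sim(token_emb, toxic_emb).max(axis=1)  # nearest toxic concept
    cln = cos_sim(token_emb, clean_emb).max(axis=1)  # nearest clean concept
    return tox - cln  # positive score: token leans toward the toxic side

def flag_risky(token_emb, toxic_emb, clean_emb, margin=0.0):
    """Return a boolean mask of tokens whose risk score exceeds `margin`."""
    return token_risk_scores(token_emb, toxic_emb, clean_emb) > margin
```

Because the mask is per token, only the flagged embeddings need to be purified in the second stage, which matches the paper's goal of minimal disruption to safe content.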
Problem

Research questions and friction points this paper is trying to address.

Prevents unsafe content generation in text-to-image models
Eliminates need for retraining or keyword blacklisting
Purifies risky prompts via semantic transformation and alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free dual-stage prompt purification for safe T2I generation
Token-level risk assessment via complementary semantic distance without retraining
Dual-space transformation removes toxic semantics and reinforces clean concepts
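The dual-space transformation in the last bullet can be sketched with standard linear-algebra projectors: project a risky token embedding onto the null space of the toxic concept matrix (removing toxic components), then reinforce directions lying in the range space of the clean concept matrix. This is a minimal sketch assuming concept matrices with one concept vector per row; the blending weight `alpha` and the exact remapping rule are assumptions, not the paper's formulation.

```python
import numpy as np

def purify_embedding(e, T, C, alpha=1.0):
    """Purify a single token embedding.

    e: (d,)      risky token embedding
    T: (k_t, d)  toxic concept matrix (one concept per row)
    C: (k_c, d)  clean concept matrix (one concept per row)
    """
    d = e.shape[0]
    # Projector onto the null space of T: components orthogonal to all toxic concepts.
    P_null = np.eye(d) - np.linalg.pinv(T) @ T
    # Projector onto the range (row) space of C: components spanned by clean concepts.
    P_range = np.linalg.pinv(C) @ C
    e_safe = P_null @ e                     # subtract unsafe semantics
    return e_safe + alpha * (P_range @ e_safe)  # reinforce safe directions
```

With `T = [[1, 0, 0]]` and `C = [[0, 1, 0]]`, the embedding `[1, 1, 0]` loses its first (toxic-aligned) component entirely while its clean-aligned component is amplified, which is the "subtract unsafe, reinforce safe" behavior the bullet describes.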
👥 Authors
Zongsheng Cao (Researcher)
Yangfan He (University of Minnesota - Twin Cities)
Anran Liu (Researcher)
Jun Xie (PCIE)
Feng Chen (PCIE)
Zepeng Wang (PCIE)