Patronus: Safeguarding Text-to-Image Models against White-Box Adversaries

📅 2025-10-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the vulnerability of text-to-image (T2I) models to white-box attacks—where adversaries fine-tune models to bypass existing safety mechanisms and generate harmful content—this paper proposes Patronus, a robust defense framework. Methodologically, Patronus introduces (1) a learnable internal moderation module that decodes semantic features of harmful text inputs into zero vectors, enabling fine-grained input classification and precise blocking; and (2) a non-fine-tunable alignment training mechanism, achieved by freezing critical representation layers and incorporating robust supervision to prevent adversarial tampering with safety alignment. Extensive experiments demonstrate that Patronus achieves 100% blocking rate against diverse white-box attacks while preserving the original model’s fidelity, safety, and diversity in generating benign content.

📝 Abstract
Text-to-image (T2I) models, though exhibiting remarkable creativity in image generation, can be exploited to produce unsafe images. Existing safety measures, e.g., content moderation or model alignment, fail in the presence of white-box adversaries who know and can adjust model parameters, e.g., by fine-tuning. This paper presents a novel defensive framework, named Patronus, which equips T2I models with holistic protection to defend against white-box adversaries. Specifically, we design an internal moderator that decodes unsafe input features into zero vectors while preserving the decoding performance on benign input features. Furthermore, we strengthen the model alignment with a carefully designed non-fine-tunable learning mechanism, ensuring the T2I model cannot be compromised by malicious fine-tuning. We conduct extensive experiments to validate that performance on safe content generation remains intact and that unsafe content generation is effectively rejected. Results also confirm the resilience of Patronus against various fine-tuning attacks by white-box adversaries.
Problem

Research questions and friction points this paper is trying to address.

Defending text-to-image models against white-box adversarial attacks
Preventing unsafe image generation through malicious fine-tuning
Ensuring model safety while maintaining benign content generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Internal moderator decodes unsafe features to zero vectors
Non-fine-tunable learning mechanism strengthens model alignment
Holistic protection framework defends against white-box adversaries
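The core moderator idea above can be illustrated with a minimal sketch. This is not the paper's learned module: here a hypothetical cosine-similarity check against known unsafe concept directions stands in for the trained internal classifier, and the blocking behavior (decoding unsafe features to the zero vector, passing benign features through) mirrors the described mechanism.

```python
import numpy as np

def moderate(features, unsafe_dirs, threshold=0.7):
    """Hypothetical internal moderator sketch.

    If the input feature vector aligns with any known unsafe concept
    direction, decode it to the zero vector (blocking generation);
    otherwise pass it through unchanged.
    """
    f = features / (np.linalg.norm(features) + 1e-8)
    for d in unsafe_dirs:
        d = d / (np.linalg.norm(d) + 1e-8)
        if float(f @ d) > threshold:
            return np.zeros_like(features)  # unsafe: zeroed out
    return features  # benign: preserved as-is

# Toy usage: one unsafe concept direction along the first axis.
unsafe_dir = [np.array([1.0, 0.0])]
blocked = moderate(np.array([1.0, 0.0]), unsafe_dir)   # zero vector
passed = moderate(np.array([0.0, 1.0]), unsafe_dir)    # unchanged
```

In the actual framework the moderator is learned jointly with the model and operates on internal semantic features rather than a fixed similarity rule; the sketch only shows the input/output contract.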
Xinfeng Li
College of Computing and Data Science, Nanyang Technological University
Shengyuan Pang
College of Electrical Engineering and the Ubiquitous System Security Lab (USSLab), Zhejiang University, Hangzhou 310058, China
Jialin Wu
College of Electrical Engineering and the Ubiquitous System Security Lab (USSLab), Zhejiang University, Hangzhou 310058, China
Jiangyi Deng
College of Electrical Engineering and the Ubiquitous System Security Lab (USSLab), Zhejiang University, Hangzhou 310058, China
Huanlong Zhong
College of Electrical Engineering and the Ubiquitous System Security Lab (USSLab), Zhejiang University, Hangzhou 310058, China
Yanjiao Chen
College of Electrical Engineering, Zhejiang University
Wireless networks · Network security · Internet of Things
Jie Zhang
ETH Zurich, D-INFK
Wenyuan Xu
Professor, IEEE Fellow, Zhejiang University, College of EE
Wireless Network Security · Embedded System Security · Analog Cyber Security · IoT Security