StyleKeeper: Prevent Content Leakage using Negative Visual Query Guidance

📅 2025-10-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
In text-to-image generation, visual style prompts often cause content leakage: unrelated semantic content from the style reference erroneously migrates into the output alongside the style. To address this, the paper proposes Negative Visual Query Guidance (NVQG), an extension of classifier-free guidance that operates in the self-attention layers of diffusion models. By swapping in query vectors from the style prompt, NVQG intentionally simulates a content-leakage scenario and uses the resulting prediction as a negative guidance signal, decoupling style from content. The paper also provides careful solutions for using real images as visual style prompts. Experiments across diverse styles and text prompts show that NVQG significantly reduces content leakage, replicating the reference style while adhering to the text prompt, and outperforming existing visual-prompting methods.
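The guidance combination described above can be sketched as an extended classifier-free-guidance step. This is an illustrative reconstruction, not the paper's exact formula: the function name `nvqg_guidance`, the specific weights, and the linear combination of terms are assumptions inferred from the summary.

```python
import numpy as np

def nvqg_guidance(eps_uncond, eps_text, eps_style_swap, eps_query_swap,
                  w_text=7.5, w_style=1.0, w_neg=1.0):
    """Combine denoiser outputs in an extended classifier-free-guidance step.

    eps_uncond     : prediction with no conditioning
    eps_text       : prediction conditioned on the text prompt
    eps_style_swap : prediction with keys/values taken from the style image
                     (swapping self-attention; injects the style)
    eps_query_swap : prediction with *queries* taken from the style image
                     (simulated content leakage; used as a negative signal)

    The weights are illustrative placeholders, not the paper's values.
    """
    return (eps_uncond
            + w_text * (eps_text - eps_uncond)       # text guidance
            + w_style * (eps_style_swap - eps_uncond)  # style guidance
            - w_neg * (eps_query_swap - eps_uncond))   # negative visual query

# Toy tensors standing in for U-Net noise predictions.
rng = np.random.default_rng(0)
preds = [rng.standard_normal((4, 8, 8)) for _ in range(4)]
out = nvqg_guidance(*preds)
print(out.shape)  # (4, 8, 8)
```

Subtracting the query-swapped prediction pushes the sample away from the simulated leakage direction at every denoising step, which is what suppresses unwanted content transfer.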

📝 Abstract
In the domain of text-to-image generation, diffusion models have emerged as powerful tools. Recently, studies on visual prompting, where images are used as prompts, have enabled more precise control over style and content. However, existing methods often suffer from content leakage, where undesired elements of the visual style prompt are transferred along with the intended style. To address this issue, we 1) extend classifier-free guidance (CFG) to utilize swapping self-attention and 2) propose negative visual query guidance (NVQG) to reduce the transfer of unwanted content. NVQG employs a negative score obtained by intentionally simulating content-leakage scenarios, in which queries, rather than keys and values, are swapped into the self-attention layers from the visual style prompt. This simple yet effective method significantly reduces content leakage. Furthermore, we provide careful solutions for using real images as visual style prompts. Through extensive evaluation across various styles and text prompts, our method demonstrates superiority over existing approaches, reflecting the style of the references and ensuring that the resulting images match the text prompts. Our code is available at https://github.com/naver-ai/StyleKeeper.
Problem

Research questions and friction points this paper is trying to address.

Preventing content leakage in text-to-image diffusion models
Reducing unwanted element transfer from visual style prompts
Enhancing style control while maintaining text prompt alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extends classifier-free guidance with swapping self-attention
Proposes negative visual query guidance to prevent leakage
Simulates content leakage scenarios by swapping attention queries
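The two attention-swapping directions listed above can be sketched with a toy scaled-dot-product attention. The variable names (`gen`, `sty`) and the tiny dimensions are hypothetical; real diffusion models apply these swaps inside U-Net self-attention layers with multi-head attention.

```python
import numpy as np

def attention(q, k, v):
    # Plain scaled dot-product attention over the token dimension.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Toy per-layer features: 16 tokens, 32 channels each.
rng = np.random.default_rng(1)
gen = {n: rng.standard_normal((16, 32)) for n in ("q", "k", "v")}  # generated image
sty = {n: rng.standard_normal((16, 32)) for n in ("q", "k", "v")}  # style prompt

# Swapping self-attention: generated queries attend to style keys/values,
# injecting the reference style into the generated image.
style_injected = attention(gen["q"], sty["k"], sty["v"])

# Simulated content leakage: style queries attend instead -- NVQG uses this
# prediction as the negative term in the guidance combination.
leakage = attention(sty["q"], gen["k"], gen["v"])

print(style_injected.shape, leakage.shape)  # (16, 32) (16, 32)
```

The asymmetry is the key idea: keys/values carry the style to transfer, while queries determine *what* is asked for, so swapping queries is a controlled way to reproduce, and then penalize, the leakage failure mode.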