CleanStyle: Plug-and-Play Style Conditioning Purification for Text-to-Image Stylization

📅 2026-02-24
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses semantic content leakage from style reference images in encoder-based diffusion models for text-to-image stylization, a failure mode that often compromises prompt fidelity and stylistic consistency. To mitigate it without model retraining, the authors propose a plug-and-play style-condition purification framework. The core innovation lies in identifying the trailing components of the style embedding, isolated via singular value decomposition (SVD), as the primary source of content leakage, and in introducing two key mechanisms: a dynamic tail-suppression strategy (CS-SVD) and a timestep-aware, style-specific classifier-free guidance (SS-CFG). This approach significantly enhances style consistency, visual quality, and text-image alignment across diverse style and prompt combinations.

๐Ÿ“ Abstract
Style transfer in diffusion models enables controllable visual generation by injecting the style of a reference image. However, recent encoder-based methods, while efficient and tuning-free, often suffer from content leakage, where semantic elements from the style image undesirably appear in the output, impairing prompt fidelity and stylistic consistency. In this work, we introduce CleanStyle, a plug-and-play framework that filters out content-related noise from the style embedding without retraining. Through empirical analysis, we observe that such leakage predominantly stems from the tail components of the style embedding, which are isolated via Singular Value Decomposition (SVD). To address this, we propose CleanStyleSVD (CS-SVD), which dynamically suppresses tail components using a time-aware exponential schedule, providing clean, style-preserving conditional embeddings throughout the denoising process. Furthermore, we present Style-Specific Classifier-Free Guidance (SS-CFG), which reuses the suppressed tail components to construct style-aware unconditional inputs. Unlike conventional methods that use generic negative embeddings (e.g., zero vectors), SS-CFG introduces targeted negative signals that reflect style-specific but prompt-irrelevant visual elements. This enables the model to effectively suppress these distracting patterns during generation, thereby improving prompt fidelity and enhancing the overall visual quality of stylized outputs. Our approach is lightweight, interpretable, and can be seamlessly integrated into existing encoder-based diffusion models without retraining. Extensive experiments demonstrate that CleanStyle substantially reduces content leakage, improves stylization quality, and strengthens prompt alignment across a wide range of style references and prompts.
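The abstract describes two mechanisms: CS-SVD, which splits the style embedding into head and tail singular components and damps the tail on a timestep-dependent exponential schedule, and SS-CFG, which reuses that suppressed tail as a targeted negative signal in classifier-free guidance. The sketch below is a minimal NumPy illustration of this idea, not the authors' implementation: the function names `cs_svd` and `ss_cfg`, the half-split point between head and tail components, and the exact shape and direction of the exponential schedule are all assumptions made for illustration.

```python
import numpy as np

def cs_svd(style_emb, t, T, keep=None, tau=0.2):
    """Sketch of a CS-SVD-style purification step.

    style_emb : (tokens, dim) style embedding matrix.
    t, T      : current and total denoising timesteps; the suppression
                weight varies exponentially with t/T (schedule shape is
                an assumption, not taken from the paper).
    keep      : number of leading singular components treated as "head";
                defaults to half (assumed split point).
    Returns (clean, tail): the purified embedding and the suppressed
    content-like residual, so that clean + tail == style_emb.
    """
    U, S, Vt = np.linalg.svd(style_emb, full_matrices=False)
    r = keep if keep is not None else len(S) // 2
    w = np.exp(-tau * t / T)          # time-aware suppression weight in (0, 1]
    S_clean = S.copy()
    S_clean[r:] *= w                  # damp trailing (content-leaking) components
    clean = (U * S_clean) @ Vt        # style-preserving conditional embedding
    tail = (U * (S - S_clean)) @ Vt   # removed residual, reused by SS-CFG
    return clean, tail

def ss_cfg(eps_cond, eps_tail, scale=7.5):
    """Sketch of SS-CFG: standard classifier-free guidance, but with the
    tail-conditioned noise prediction as a style-specific negative branch
    in place of a generic unconditional (e.g., zero-embedding) one."""
    return eps_tail + scale * (eps_cond - eps_tail)
```

In this sketch, early and late timesteps get different suppression strengths through `w`, and the residual `tail` that CS-SVD removes is exactly what SS-CFG steers away from, which is how the two mechanisms fit together in the abstract's description.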
Problem

Research questions and friction points this paper is trying to address.

content leakage
text-to-image stylization
style transfer
diffusion models
prompt fidelity
Innovation

Methods, ideas, or system contributions that make the work stand out.

content leakage
style transfer
diffusion models
Singular Value Decomposition
classifier-free guidance
🔎 Similar Papers
No similar papers found.
Xiaoman Feng
AGI Lab, Westlake University
Mingkun Lei
Westlake University
image and video synthesis
Yang Wang
AGI Lab, Westlake University
Dingwen Fu
AGI Lab, Westlake University
Chi Zhang
Director of AGI Lab, Westlake University
Artificial Intelligence