🤖 AI Summary
Existing image style transfer methods suffer from three key limitations: inaccurate style matching, severe content leakage, and ineffective utilization of multiple style references. To address these issues, we propose a multi-style transfer framework built upon latent diffusion models. First, representative attention features are extracted from multiple style images via clustering. Second, an image prompt adapter is designed to inject style priors simultaneously into both cross-attention and self-attention layers during the denoising process. Third, a feature-statistics alignment mechanism is introduced to explicitly disentangle content and style representations. Our method achieves state-of-the-art performance across multiple benchmarks, significantly improving style fidelity and suppressing content leakage. Notably, it enables end-to-end joint modeling and fusion of multiple style references for the first time.
📝 Abstract
Recent advances in latent diffusion models have enabled exciting progress in image style transfer. However, several key issues remain. For example, existing methods still struggle to accurately match styles. They are often limited in the number of style images that can be used. Furthermore, they tend to entangle content and style in undesired ways. To address this, we propose leveraging multiple style images which helps better represent style features and prevent content leaking from the style images. We design a method that leverages both image prompt adapters and statistical alignment of the features during the denoising process. With this, our approach is designed such that it can intervene both at the cross-attention and the self-attention layers of the denoising UNet. For the statistical alignment, we employ clustering to distill a small representative set of attention features from the large number of attention values extracted from the style samples. As demonstrated in our experimental section, the resulting method achieves state-of-the-art results for stylization.