🤖 AI Summary
Existing style transfer methods overlook "signature styles": the distinctive visual characteristics, such as recognizable geometric structures, brushwork patterns, and color palettes, inherent to individual artists or domains. This paper introduces the first framework explicitly designed for modeling and transferring signature styles. The approach comprises three key components: (1) a style-embedding hypernetwork that enables single-image-driven fine-tuning of diffusion models; (2) a time-aware attention swapping mechanism that preserves structural consistency of the content across diffusion timesteps; and (3) a style token reconstruction module supporting localized style transfer, texture transfer, multi-style fusion, and text-guided generation. Extensive qualitative and quantitative evaluations demonstrate state-of-the-art performance in both signature style recognition accuracy and transfer fidelity, significantly outperforming prior methods. Furthermore, the framework generalizes effectively to diverse downstream applications, including artistic stylization, domain adaptation, and controllable image synthesis.
📝 Abstract
Style transfer enables the seamless integration of artistic styles from a style image into a content image, resulting in visually striking and aesthetically enriched outputs. Despite numerous advances in this field, existing methods have not explicitly focused on the signature style, which represents the distinct and recognizable visual traits of an image, such as geometric and structural patterns, color palettes, and brush strokes. In this paper, we introduce SigStyle, a framework that leverages the semantic priors embedded in a personalized text-to-image diffusion model to capture the signature style representation. This style capture process is powered by a hypernetwork that efficiently fine-tunes the diffusion model for any given single style image. Style transfer is then conceptualized as reconstructing the content image through the style tokens learned by the personalized diffusion model. Additionally, to ensure content consistency throughout the style transfer process, we introduce a time-aware attention swapping technique that incorporates content information from the original image into the early denoising steps of target image generation. Beyond enabling high-quality signature style transfer across a wide range of styles, SigStyle supports multiple interesting applications, such as local style transfer, texture transfer, style fusion, and style-guided text-to-image generation. Quantitative and qualitative evaluations demonstrate that our approach outperforms existing style transfer methods in recognizing and transferring signature styles.
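To make the time-aware attention swapping idea concrete, the following is a minimal NumPy sketch, not the paper's implementation. It assumes a simplified self-attention layer and a hypothetical `t_swap` threshold: at early (high-noise) denoising steps, the keys and values from the content image's denoising trajectory are swapped in so the output attends to the content structure; at later steps, the style branch uses its own keys and values. Function and variable names (`denoise_step_attention`, `t_swap`) are illustrative assumptions.

```python
import numpy as np

def attention(q, k, v):
    # Standard scaled dot-product attention over the token dimension.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def denoise_step_attention(q_style, kv_style, kv_content, t, t_swap):
    """One attention call inside a denoising step.

    t counts down from high noise (e.g. 1000) to 0. For early steps
    (t >= t_swap), swap in the content image's keys/values to preserve
    structure; afterwards, use the style branch's own keys/values.
    """
    k, v = kv_content if t >= t_swap else kv_style
    return attention(q_style, k, v)
```

In a real diffusion U-Net this swap would be applied inside selected self-attention layers while denoising the target image, with the content keys/values cached from an inversion pass of the content image; the toy above only illustrates the timestep-gated switch itself.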