🤖 AI Summary
To address limitations of conventional style transfer methods in semantic alignment, content fidelity, and color controllability, this paper proposes StyleWallfacer—a unified framework enabling end-to-end integration of high-fidelity image-driven style transfer and text-driven stylization. Methodologically, it introduces a semantics-gap-driven style injection mechanism, human-feedback-enhanced data augmentation, and a training-free triple-diffusion strategy—marking the first instance of simultaneous fine-grained color editing within style transfer. Technically, it synergistically combines BLIP/CLIP-based multimodal alignment, large language model–guided semantic purification, self-attention feature remapping, and query-preservation mechanisms. Experiments demonstrate substantial suppression of style drift and content distortion, achieving state-of-the-art performance across multiple benchmarks. The framework supports artist-level stylistic expression and cross-domain color regeneration while preserving structural integrity and semantic coherence.
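The "semantic purification" step can be pictured as stripping style descriptors from a caption so that only content semantics remain, leaving a semantic gap for fine-tuning. The toy filter below is an illustrative stand-in for the LLM-guided step: the `STYLE_TERMS` list, the function name, and the rule-based approach are assumptions, not the paper's implementation.

```python
# Toy stand-in for LLM-guided semantic purification: drop style-related
# descriptors from a BLIP-style caption so the remainder describes
# content only. The term list and rule-based filter are illustrative
# assumptions; the paper uses a large language model for this step.
STYLE_TERMS = {"oil", "painting", "watercolor", "sketch",
               "impressionist", "style", "drawing"}

def purify_caption(caption: str) -> str:
    kept = [w for w in caption.split()
            if w.lower().strip(",.") not in STYLE_TERMS]
    return " ".join(kept)
```

The difference between the original caption and its purified version is the "semantic gap" that fine-tuning can then associate with the style itself.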
📝 Abstract
In this study, we introduce StyleWallfacer, a unified training and inference framework that addresses several long-standing issues in traditional style transfer methods while unifying different tasks under a single framework, enabling artist-level style transfer and text-driven stylization. First, we propose a semantic-based style injection method that uses BLIP to generate text descriptions strictly aligned with the semantics of the style image in CLIP space. A large language model then removes the style-related terms from these descriptions, creating a semantic gap; this gap is used to fine-tune the model, enabling efficient and drift-free injection of style knowledge. Second, we propose a data augmentation strategy based on human feedback: high-quality samples generated early in fine-tuning are added to the training set to facilitate progressive learning and significantly reduce overfitting. Finally, we design a training-free triple diffusion process using the fine-tuned model, which manipulates the features of the self-attention layers in a manner similar to the cross-attention mechanism. Specifically, during generation, the keys and values of the content-related process are replaced with those of the style-related process, injecting style while maintaining text control over the model. We also introduce query preservation to mitigate disruption of the original content. Under this design, we achieve high-quality image-driven style transfer and text-driven stylization, delivering artist-level style transfer results while preserving the original image content. Moreover, we achieve image color editing during the style transfer process for the first time.
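The key/value replacement and query preservation described above can be sketched as a single-head, unbatched attention step in NumPy. This is a minimal illustration under assumed simplifications (no multi-head projections, no diffusion timestep scheduling); the function names and the `alpha` blending coefficient are hypothetical, not values from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the attention scores.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def style_injected_attention(q_content, k_style, v_style):
    """One self-attention step where the content process's query attends
    over the style process's keys and values (the K/V replacement idea).
    Shapes: q_content (n, d), k_style (m, d), v_style (m, d)."""
    d = q_content.shape[-1]
    attn = softmax(q_content @ k_style.T / np.sqrt(d))
    return attn @ v_style

def preserve_query(q_original, q_current, alpha=0.8):
    """Hypothetical query-preservation blend: keep a weighted share of
    the original content query to limit disruption of the content.
    alpha is an assumed hyperparameter."""
    return alpha * q_original + (1 - alpha) * q_current
```

Because only the keys and values are swapped while the (preserved) query still comes from the content branch, the spatial layout of the content image steers where style features land, which is why text conditioning and structure survive the injection.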