AI Summary
This work addresses the challenge of achieving precise example-based stylization while preserving both semantic fidelity and stylistic authenticity, a balance often compromised by existing methods that rely on task-specific retraining or costly inverse mapping. The authors reformulate the problem as a zero-shot in-context learning task, leveraging a pre-trained ReFlow inpainting model by directly concatenating a reference style image with a masked target image to jointly embed semantic content and visual style. Central to their approach is the Dynamic Semantic-Style Integration (DSSI) mechanism, which adaptively reweights attention contributions from textual and visual guidance to mitigate multimodal conflicts. Requiring no additional training, the proposed method significantly outperforms current state-of-the-art techniques in both semantic-style balance and overall generation quality.
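The concatenation step described above can be sketched in a few lines. This is a minimal illustration of the general idea (a style reference placed beside a masked target so an inpainting model completes the target in the reference's style); the array layout, shapes, and function name are assumptions for illustration, not the paper's exact preprocessing.

```python
import numpy as np

def build_incontext_input(style_ref, target, mask):
    """Build a side-by-side inpainting input from a style reference
    and a masked target (illustrative layout, not the paper's code).

    style_ref, target: (H, W, C) float arrays
    mask: (H, W) binary array, 1 = region to be synthesized
    """
    # Zero out the region of the target that the model should fill in
    masked_target = target * (1.0 - mask[..., None])
    # Side-by-side canvas: [style reference | masked target]
    canvas = np.concatenate([style_ref, masked_target], axis=1)
    # Canvas-level mask: nothing is inpainted on the style half,
    # so the reference stays intact as in-context guidance
    canvas_mask = np.concatenate([np.zeros_like(mask), mask], axis=1)
    return canvas, canvas_mask
```

Keeping the style half fully unmasked is what lets a stock inpainting model act as an in-context learner: its attention can read style statistics from the intact reference while synthesizing only the masked target region.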
Abstract
Text-guided image generation has advanced rapidly with large-scale diffusion models, yet achieving precise stylization with visual exemplars remains difficult. Existing approaches often depend on task-specific retraining or expensive inversion procedures, which can compromise content integrity, reduce style fidelity, and lead to an unsatisfactory trade-off between semantic prompt adherence and style alignment. In this work, we introduce a training-free framework that reformulates style-guided synthesis as an in-context learning task. Guided by textual semantic prompts, our method concatenates a reference style image with a masked target image, leveraging a pretrained ReFlow-based inpainting model to seamlessly integrate semantic content with the desired style through multimodal attention fusion. We further analyze the imbalance and noise sensitivity inherent in multimodal attention fusion and propose a Dynamic Semantic-Style Integration (DSSI) mechanism that reweights attention between textual semantic and style visual tokens, effectively resolving guidance conflicts and enhancing output coherence. Experiments show that our approach achieves high-fidelity stylization with superior semantic-style balance and visual quality, offering a simple yet powerful alternative to complex, artifact-prone prior methods.
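The reweighting idea behind DSSI can be illustrated with a toy single-head attention in which a scalar weight rebalances the contribution of textual versus style-image tokens. This is a simplified sketch of the general mechanism only: the function name, the fixed scalar `lam`, and the log-bias formulation are assumptions for illustration, whereas the paper's DSSI computes its weights adaptively from the attention statistics themselves.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def reweighted_attention(q, k_text, v_text, k_style, v_style, lam):
    """Single-head attention over concatenated text and style tokens,
    with a scalar lam in [0, 1] shifting weight toward text guidance
    as lam -> 1 (toy stand-in for DSSI's adaptive reweighting)."""
    d = q.shape[-1]
    k = np.concatenate([k_text, k_style], axis=0)
    v = np.concatenate([v_text, v_style], axis=0)
    logits = q @ k.T / np.sqrt(d)
    # Additive log-bias: boosts one modality's attention mass
    # without changing the relative ordering within a modality
    bias = np.concatenate([
        np.full(k_text.shape[0], np.log(lam + 1e-8)),
        np.full(k_style.shape[0], np.log(1.0 - lam + 1e-8)),
    ])
    attn = softmax(logits + bias, axis=-1)
    return attn @ v
```

Because the bias is applied inside the softmax, the two guidance sources still compete for a shared attention budget; tuning the balance there, rather than mixing two separate outputs afterward, is what lets conflicting text and style signals be resolved per query token.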