🤖 AI Summary
Existing image style transfer methods struggle to preserve pixel-level semantic correspondence between content and style images. This work proposes a lightweight, training-free framework that, for the first time, explicitly models dense pixel-wise alignment within pre-trained latent diffusion models. By leveraging intermediate features extracted during the diffusion process, the method establishes fine-grained content-style correspondences and enforces a cycle-consistency constraint without any additional training overhead, thereby preserving both geometric structure and semantic details. The approach achieves superior performance in both visual quality and quantitative metrics compared to existing techniques that rely on extra training or annotations, enabling high-fidelity, fine-grained style transfer.
📝 Abstract
Transferring visual style between images while preserving semantic correspondence between similar objects remains a central challenge in computer vision. While existing methods have made great strides, most of them operate at global level but overlook region-wise and even pixel-wise semantic correspondence. To address this, we propose CoCoDiff, a novel training-free and low-cost style transfer framework that leverages pretrained latent diffusion models to achieve fine-grained, semantically consistent stylization. We identify that correspondence cues within generative diffusion models are under-explored and that content consistency across semantically matched regions is often neglected. CoCoDiff introduces a pixel-wise semantic correspondence module that mines intermediate diffusion features to construct a dense alignment map between content and style images. Furthermore, a cycle-consistency module then enforces structural and perceptual alignment across iterations, yielding object and region level stylization that preserves geometry and detail. Despite requiring no additional training or supervision, CoCoDiff delivers state-of-the-art visual quality and strong quantitative results, outperforming methods that rely on extra training or annotations.