CoCoDiff: Correspondence-Consistent Diffusion Model for Fine-grained Style Transfer

📅 2026-02-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing image style transfer methods struggle to preserve pixel-level semantic correspondence between content and style images. This work proposes a lightweight, training-free framework that, for the first time, explicitly models dense pixel-wise alignment within pre-trained latent diffusion models. By leveraging intermediate features extracted during the diffusion process, the method establishes fine-grained content-style correspondences and enforces a cycle-consistency constraint without any additional training overhead, thereby preserving both geometric structure and semantic details. The approach achieves superior performance in both visual quality and quantitative metrics compared to existing techniques that rely on extra training or annotations, enabling high-fidelity, fine-grained style transfer.
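The dense pixel-wise alignment described above can be sketched as nearest-neighbour matching under cosine similarity over intermediate feature maps. The function below is a minimal illustration of that idea only; the feature shapes, the choice of diffusion layer/timestep, and the function name are assumptions, not the paper's exact design:

```python
import numpy as np

def dense_correspondence(feat_content, feat_style):
    """Match each content pixel to its most similar style pixel.

    feat_content, feat_style: (H, W, C) arrays of intermediate
    diffusion features (hypothetical shapes; which layer/timestep
    to use is the paper's design choice, not fixed here).
    Returns a (H*W,) array of flat style-pixel indices.
    """
    h, w, c = feat_content.shape
    fc = feat_content.reshape(-1, c)
    fs = feat_style.reshape(-1, c)
    # L2-normalise so the dot product equals cosine similarity
    fc = fc / (np.linalg.norm(fc, axis=1, keepdims=True) + 1e-8)
    fs = fs / (np.linalg.norm(fs, axis=1, keepdims=True) + 1e-8)
    sim = fc @ fs.T          # (H*W, H*W) pairwise cosine similarities
    return sim.argmax(axis=1)  # nearest style pixel per content pixel
```

With identical content and style features, each pixel maps back to itself, which is a quick sanity check on the matching.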

📝 Abstract
Transferring visual style between images while preserving semantic correspondence between similar objects remains a central challenge in computer vision. While existing methods have made great strides, most operate at a global level and overlook region-wise, let alone pixel-wise, semantic correspondence. To address this, we propose CoCoDiff, a novel training-free, low-cost style transfer framework that leverages pretrained latent diffusion models to achieve fine-grained, semantically consistent stylization. We observe that correspondence cues within generative diffusion models are under-explored and that content consistency across semantically matched regions is often neglected. CoCoDiff introduces a pixel-wise semantic correspondence module that mines intermediate diffusion features to construct a dense alignment map between content and style images. A cycle-consistency module then enforces structural and perceptual alignment across iterations, yielding object- and region-level stylization that preserves geometry and detail. Despite requiring no additional training or supervision, CoCoDiff delivers state-of-the-art visual quality and strong quantitative results, outperforming methods that rely on extra training or annotations.
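The cycle-consistency constraint mentioned in the abstract can be illustrated as a round-trip check: a content pixel's match is accepted only if mapping it to the style image and back lands near where it started. This is a generic sketch of that idea (the tolerance, the function name, and the flat-index representation are assumptions, not the paper's formulation):

```python
import numpy as np

def cycle_consistent_mask(fwd, bwd, h, w, tol=1.0):
    """Flag content pixels whose correspondence survives a round trip.

    fwd: (h*w,) flat indices mapping content pixels to style pixels.
    bwd: (h*w,) flat indices mapping style pixels back to content pixels.
    A pixel is cycle-consistent if the round trip content -> style ->
    content returns within `tol` pixels of its starting position.
    Returns a boolean (h*w,) mask.
    """
    idx = np.arange(h * w)
    back = bwd[fwd[idx]]              # where each pixel lands after the round trip
    dy = back // w - idx // w         # row displacement
    dx = back % w - idx % w           # column displacement
    return np.hypot(dy, dx) <= tol
```

Pixels that fail the check would typically be excluded from (or down-weighted in) the pixel-wise stylization objective, so that only reliable matches constrain the output.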
Problem

Research questions and friction points this paper is trying to address.

style transfer
semantic correspondence
fine-grained stylization
pixel-wise alignment
content consistency
Innovation

Methods, ideas, or system contributions that make the work stand out.

semantic correspondence
diffusion model
style transfer
cycle consistency
training-free
Wenbo Nie
Institute of Information Science, Beijing Jiaotong University; Visual Intelligence + X International Joint Laboratory of the Ministry of Education
Zixiang Li
Beijing Jiaotong University
Renshuai Tao
Institute of Information Science, Beijing Jiaotong University; Visual Intelligence + X International Joint Laboratory of the Ministry of Education
Bin Wu
Institute of Information Science, Beijing Jiaotong University; Visual Intelligence + X International Joint Laboratory of the Ministry of Education
Yunchao Wei
Professor, Beijing Jiaotong University, UTS, UIUC, NUS
Computer Vision, Machine Learning
Yao Zhao
Institute of Information Science, Beijing Jiaotong University; Visual Intelligence + X International Joint Laboratory of the Ministry of Education