FBSDiff: Plug-and-Play Frequency Band Substitution of Diffusion Features for Highly Controllable Text-Driven Image Translation

๐Ÿ“… 2024-08-02
๐Ÿ›๏ธ ACM Multimedia
๐Ÿ“ˆ Citations: 2
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Pretrained text-to-image diffusion models offer limited controllability and struggle with precise, text-driven image editing. To address this, we propose a training-free, plug-and-play image-to-image translation method. Our core innovation is applying the discrete cosine transform (DCT) to map intermediate diffusion features into the frequency domain, enabling a novel frequency band substitution layer that supports on-demand disentanglement and substitution of semantic, stylistic, and other guiding factors, thereby facilitating fine-grained, high-degree-of-freedom cross-modal editing. The method is architecture-agnostic and integrates with mainstream diffusion models (e.g., Stable Diffusion) without fine-tuning. Extensive qualitative and quantitative evaluations demonstrate improvements in image quality, semantic fidelity, and editing controllability. Code and a project page are publicly released.

๐Ÿ“ Abstract
Large-scale text-to-image diffusion models have been a revolutionary milestone in the evolution of generative AI, enabling impressive image generation from natural-language text prompts. However, the lack of controllability of such models restricts their practical applicability for real-life content creation. Thus, attention has turned to leveraging a reference image to control text-to-image synthesis, which can also be regarded as manipulating (or editing) a reference image as per a text prompt, namely, text-driven image-to-image translation. This paper contributes a novel, concise, and efficient approach that adapts a pre-trained large-scale text-to-image (T2I) diffusion model to the image-to-image (I2I) paradigm in a plug-and-play manner, realizing high-quality and versatile text-driven I2I translation without model training, fine-tuning, or online optimization. To guide T2I generation with a reference image, we propose to decompose diverse guiding factors into different frequency bands of diffusion features in the DCT spectral space, and accordingly devise a novel frequency band substitution layer that realizes dynamic control of the reference image over the T2I generation result in a plug-and-play manner. We demonstrate that our method allows flexible control over both the guiding factor and the guiding intensity of the reference image simply by tuning the type and bandwidth, respectively, of the substituted frequency band. Extensive qualitative and quantitative experiments verify the superiority of our approach over related methods in I2I translation visual quality, versatility, and controllability. Our project is publicly available at: https://xianggao1102.github.io/FBSDiff_webpage/.
Problem

Research questions and friction points this paper is trying to address.

Enhancing controllability of text-to-image diffusion models
Enabling plug-and-play text-driven image translation
Improving image quality and controllability via frequency band substitution
Innovation

Methods, ideas, or system contributions that make the work stand out.

Plug-and-play frequency band substitution layer
DCT spectral space decomposition for control
No training or fine-tuning required
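The core idea behind the bullets above, substituting a chosen DCT frequency band of a generated diffusion feature with the corresponding band from the reference image's feature, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the single-channel feature maps, and the diagonal `(u + v) < bandwidth` band mask are assumptions made for clarity.

```python
import numpy as np
from scipy.fft import dctn, idctn


def frequency_band_substitution(gen_feat, ref_feat, band="low", bandwidth=8):
    """Illustrative sketch of a frequency band substitution step.

    Both 2-D feature maps are mapped into the DCT spectral domain; the
    selected frequency band of the reference spectrum is copied into the
    generated feature's spectrum, which is then mapped back. The band
    type ("low"/"high") picks the guiding factor, while `bandwidth`
    tunes the guiding intensity of the reference.
    """
    gen_spec = dctn(gen_feat, norm="ortho")
    ref_spec = dctn(ref_feat, norm="ortho")
    h, w = gen_spec.shape
    # In a 2-D DCT, low frequencies occupy the top-left corner of the grid.
    v, u = np.meshgrid(np.arange(w), np.arange(h))
    if band == "low":
        mask = (u + v) < bandwidth   # substitute the low-frequency band
    else:
        mask = (u + v) >= bandwidth  # substitute the high-frequency band
    gen_spec[mask] = ref_spec[mask]
    return idctn(gen_spec, norm="ortho")
```

In the plug-and-play setting, a step like this would be applied to intermediate U-Net features at selected denoising steps, so the reference image steers generation without any training or fine-tuning.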
๐Ÿ”Ž Similar Papers
No similar papers found.
X
Xiang Gao
Wangxuan Institute of Computer Technology, Peking University, Beijing, 100871 China
Jiaying Liu
Dalian University of Technology
Graph Learning · Data Science · Computational Social Science