PFB-Diff: Progressive Feature Blending Diffusion for Text-driven Image Editing

📅 2023-06-28
🏛️ Neural Networks
📈 Citations: 17
Influential: 1
📄 PDF
🤖 AI Summary
Existing diffusion-based local editing methods suffer from semantic inconsistency and artifacts because they blend noised target images with diffusion latent variables directly at the latent level, where the necessary semantics for image consistency are absent. To address this, the paper proposes PFB-Diff, a fine-tuning-free, text-driven editing framework. First, it progressively blends text-guided generated content into the target image across multiple levels of U-Net features; the rich semantics of deep features and the high-to-low blending order preserve semantic coherence and image quality. Second, it introduces an attention masking mechanism in the cross-attention layers that confines the influence of specific words to desired regions, improving background editing and multi-object replacement. PFB-Diff handles object/background replacement and object attribute editing, and outperforms prior approaches in both editing accuracy and visual quality without any training or fine-tuning.
📝 Abstract
Diffusion models have demonstrated their ability to generate diverse and high-quality images, sparking considerable interest in their potential for real image editing applications. However, existing diffusion-based approaches for local image editing often suffer from undesired artifacts due to the latent-level blending of the noised target images and diffusion latent variables, which lack the necessary semantics for maintaining image consistency. To address these issues, we propose PFB-Diff, a Progressive Feature Blending method for Diffusion-based image editing. Unlike previous methods, PFB-Diff seamlessly integrates text-guided generated content into the target image through multi-level feature blending. The rich semantics encoded in deep features and the progressive blending scheme from high to low levels ensure semantic coherence and high quality in edited images. Additionally, we introduce an attention masking mechanism in the cross-attention layers to confine the impact of specific words to desired regions, further improving the performance of background editing and multi-object replacement. PFB-Diff can effectively address various editing tasks, including object/background replacement and object attribute editing. Our method demonstrates its superior performance in terms of editing accuracy and image quality without the need for fine-tuning or training. Our implementation is available at https://github.com/CMACH508/PFB-Diff.
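The core operation the abstract describes, blending generated features into source features at multiple U-Net levels under a spatial mask, can be sketched as follows. This is a minimal illustrative sketch on toy NumPy arrays, not the paper's implementation; the function and parameter names (`blend_features`, `levels`) are hypothetical.

```python
import numpy as np

def blend_features(src_feats, gen_feats, mask, levels):
    """Progressive feature blending sketch: at each selected U-Net level,
    edited regions (mask == 1) take the text-guided generated features,
    while the rest keeps the source-image features.

    src_feats, gen_feats: lists of (h, w) feature maps, deepest first.
    mask: binary (H, W) edit mask at full resolution.
    levels: set of level indices at which blending is applied.
    """
    blended = []
    for lvl, (src, gen) in enumerate(zip(src_feats, gen_feats)):
        if lvl in levels:
            # Downsample the mask to this level's resolution (nearest-neighbor).
            h, w = src.shape[-2:]
            m = mask[::mask.shape[0] // h, ::mask.shape[1] // w]
            blended.append(m * gen + (1 - m) * src)
        else:
            blended.append(gen)
    return blended
```

Applying the blend from deep (semantically rich) levels down to shallow ones is what the paper's "progressive" scheme refers to; the sketch above only shows the per-level mixing rule.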
Problem

Research questions and friction points this paper is trying to address.

Artifacts caused by latent-level blending of noised targets and diffusion latents
Loss of semantic coherence between edited content and the surrounding image
Imprecise spatial control in background editing and multi-object replacement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Progressive multi-level feature blending for editing
Attention masking for precise region control
No fine-tuning required for high-quality results
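The attention-masking idea, confining a word's influence to a desired region, can be illustrated by masking a token's cross-attention logits before the softmax. A minimal sketch, assuming logits of shape (num_pixels, num_tokens); the name `masked_cross_attention` is hypothetical, not from the paper's code.

```python
import numpy as np

def masked_cross_attention(scores, token_idx, region_mask):
    """Restrict one text token's cross-attention to a spatial region.

    scores: (num_pixels, num_tokens) attention logits.
    token_idx: index of the word to confine.
    region_mask: binary mask over pixels; outside it, the token's
    logits are set to -inf so it receives zero attention there.
    """
    scores = scores.copy()
    scores[region_mask.ravel() == 0, token_idx] = -np.inf
    # Numerically stable softmax over the token axis.
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return w / w.sum(axis=-1, keepdims=True)
```

After the softmax, pixels outside the region assign zero weight to the confined token, so the word can only shape the image inside its mask.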
🔎 Similar Papers
No similar papers found.
Wenjing Huang
RAND Corporation
Psychometrics, Structural Equation Modeling, Item Response Theory, Cyber Security
Shikui Tu
Shanghai Jiao Tong University
Lei Xu
Shanghai Jiao Tong University