An Item is Worth a Prompt: Versatile Image Editing with Disentangled Control

πŸ“… 2024-03-07
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 4
✨ Influential: 0
πŸ€– AI Summary
Existing prompt-based image editing methods struggle with fine-grained control, often introducing distortions in unedited regions and yielding unnatural results; moreover, they cannot jointly edit both the image and its corresponding textual description. To address these limitations, we propose D-Editβ€”a novel framework that, for the first time, decouples image-prompt interaction into object-level bindable and learnable prompts, enabling mask-driven object-level editing. D-Edit unifies four editing paradigms: image editing, text editing, mask-guided editing, and object removal. Built upon a pre-trained diffusion model, it establishes precise object-prompt alignment via decoupled cross-attention mechanisms and a two-stage optimization strategy. Extensive qualitative and quantitative evaluations demonstrate that D-Edit achieves state-of-the-art performance across all four tasks, delivering high-fidelity edits and strong generalization to unseen objects and prompts.

πŸ“ Abstract
Building on the success of text-to-image diffusion models (DPMs), image editing is an important application for enabling human interaction with AI-generated content. Among the various editing methods, editing within the prompt space has gained the most attention due to its capacity for, and simplicity in, controlling semantics. However, since diffusion models are commonly pretrained on descriptive text captions, directly editing words in a text prompt usually produces a completely different image, violating the requirements of image editing. On the other hand, existing editing methods usually introduce spatial masks to preserve the identity of unedited regions, but these masks are often ignored by DPMs and therefore lead to inharmonious editing results. Targeting these two challenges, in this work we propose to disentangle the comprehensive image-prompt interaction into several item-prompt interactions, with each item linked to a dedicated learned prompt. The resulting framework, named D-Edit, is based on a pretrained diffusion model with disentangled cross-attention layers and adopts a two-step optimization to build item-prompt associations. Versatile image editing can then be applied to specific items by manipulating their corresponding prompts. We demonstrate state-of-the-art results in four types of editing operations: image-based, text-based, and mask-based editing, plus item removal, covering most editing applications within a single unified framework. Notably, D-Edit is the first framework that can (1) achieve item editing through mask editing and (2) combine image- and text-based editing. We demonstrate the quality and versatility of the editing results on a diverse collection of images through both qualitative and quantitative evaluations.
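The core idea in the abstract, restricting each image region (item) so that it attends only to its own learned prompt inside cross-attention, can be sketched as follows. This is a minimal single-head NumPy illustration, not the paper's implementation: the function name, the absence of learned projection weights, and the hard mask partition are simplifying assumptions for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def disentangled_cross_attention(queries, item_masks, item_prompts, d_k):
    """Illustrative item-prompt cross-attention (assumed simplification).

    queries:      (N, d) flattened image features (one row per pixel/patch)
    item_masks:   (K, N) boolean masks partitioning pixels into K items
    item_prompts: list of K arrays, each (L_i, d) of learned prompt embeddings
    d_k:          key dimension used for attention scaling
    """
    out = np.zeros_like(queries)
    for mask, prompt in zip(item_masks, item_prompts):
        q = queries[mask]          # only the pixels belonging to this item
        k = v = prompt             # keys/values come from this item's prompt only
        attn = softmax(q @ k.T / np.sqrt(d_k))
        out[mask] = attn @ v       # unedited items are untouched by other prompts
    return out
```

Because each item's output depends only on its own prompt, swapping or removing one prompt edits that item's region without perturbing the others, which is the mechanism the abstract credits for harmonious, identity-preserving edits.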
Problem

Research questions and friction points this paper is trying to address.

Precise Control
Detail Preservation
Joint Editing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Interactive Image Editing
Decoupled Image Description
Simultaneous Image-Text Modification
Aosong Feng
Yale University
Weikang Qiu
PhD student, Yale University
Machine Learning, Neuroscience
Jinbin Bai
National University of Singapore
Machine Learning, Content Creation, Generative Modeling
Kaicheng Zhou
Collov Labs
Zhen Dong
Collov Labs
Xiao Zhang
Collov Labs
Rex Ying
Yale University
L. Tassiulas
Yale University