ObjectAdd: Adding Objects into Image via a Training-Free Diffusion Modification Fashion

📅 2024-04-26
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
This paper addresses the challenge of zero-shot, region-specific image editing with diffusion models: precisely inserting a target object into a user-specified rectangular region while preserving out-of-region content at the pixel level and ensuring natural boundary blending. Methodologically, it introduces three novel techniques: (1) embedding-level concatenation for text-spatial alignment; (2) object-driven latent-space and attention injection for layout-controllable editing; and (3) attention refocusing combined with expanded prompt-guided inpainting to enhance local fidelity. Leveraging only a pre-trained text-to-image diffusion model, the method performs high-quality editing via forward inference—without any fine-tuning or training. Experiments demonstrate that it achieves semantic precision within the bounding box, exact preservation outside it, and seamless cross-boundary integration without artifacts—outperforming existing zero-shot editing approaches in fidelity and controllability.
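The object-driven latent injection described above can be sketched in a few lines: inside the user-specified box the object's latent replaces the background latent, and outside the box the original latent is kept untouched, which is what preserves out-of-region content at the pixel level. This is a hypothetical numpy illustration of the idea, not the paper's implementation; the function name, the `(top, left, bottom, right)` box convention, and the hard binary mask are assumptions.

```python
import numpy as np

def inject_latent(background_latent, object_latent, box):
    """Blend an object latent into a background latent inside a user box.

    Hypothetical sketch of object-driven latent injection: copy the object
    latent inside the rectangle, keep the original latent everywhere else.

    background_latent, object_latent: arrays of shape (C, H, W)
    box: (top, left, bottom, right) in latent coordinates
    """
    top, left, bottom, right = box
    mask = np.zeros(background_latent.shape[-2:], dtype=background_latent.dtype)
    mask[top:bottom, left:right] = 1.0  # 1 inside the box, 0 outside
    # Broadcast the (H, W) mask over the channel dimension.
    return mask * object_latent + (1.0 - mask) * background_latent
```

In practice the paper blends during denoising and also injects attention, but the masking logic that guarantees exact preservation outside the box follows this pattern.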

📝 Abstract
We introduce ObjectAdd, a training-free diffusion modification method that adds user-expected objects into a user-specified area. The motivation for ObjectAdd is twofold: first, describing everything in one prompt can be difficult, and second, users often need to add objects into the generated image. To accommodate real-world use, ObjectAdd maintains accurate image consistency after adding objects through technical innovations in: (1) embedding-level concatenation to ensure correct coalescence of text embeddings; (2) object-driven layout control with latent and attention injection to ensure objects land in the user-specified area; (3) prompted image inpainting in an attention-refocusing and object-expansion fashion to ensure the rest of the image stays the same. Given a text-prompted image, ObjectAdd lets users specify a box and an object, and achieves: (1) adding the object inside the box area; (2) exact preservation of content outside the box area; (3) flawless fusion between the two areas.
Problem

Research questions and friction points this paper is trying to address.

Adding objects into images without training
Maintaining image consistency after object insertion
Precise object placement in user-specified areas
Innovation

Methods, ideas, or system contributions that make the work stand out.

Embedding-level concatenation for coalescing text embeddings
Object-driven layout control with latent and attention injection
Attention refocusing and object expansion for seamless image inpainting
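The attention-refocusing innovation can be illustrated with a small numpy sketch: amplify the added object token's cross-attention inside the user box, suppress it outside, and renormalize so each spatial location's token weights still sum to one. The function name, the `scale` parameter, and the hard zeroing outside the box are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def refocus_attention(attn_maps, box_mask, object_token, scale=2.0):
    """Hypothetical sketch of attention refocusing.

    attn_maps:    (num_tokens, H, W) cross-attention maps, normalized
                  over tokens at each spatial location
    box_mask:     (H, W) binary mask of the user-specified box
    object_token: index of the added object's token
    scale:        amplification factor inside the box (assumed)
    """
    attn = attn_maps.copy()
    # Boost the object token inside the box, zero it outside.
    attn[object_token] *= np.where(box_mask > 0, scale, 0.0)
    # Renormalize over tokens so weights at each pixel sum to 1.
    attn /= attn.sum(axis=0, keepdims=True) + 1e-8
    return attn
```

This keeps the object's influence confined to the box, which is what lets the expanded-prompt inpainting restore the rest of the image unchanged.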
Ziyue Zhang
Xiamen University
Mingbao Lin
Principal Research Scientist, Rakuten
Model Compression, (Multimodal) LLMs, Diffusion Models
Rongrong Ji
Xiamen University