🤖 AI Summary
This paper addresses the challenge of zero-shot, region-specific image editing with diffusion models: precisely inserting a target object into a user-specified rectangular region while preserving out-of-region content at the pixel level and ensuring natural boundary blending. Methodologically, it introduces three novel techniques: (1) embedding-level concatenation for text-spatial alignment; (2) object-driven latent-space and attention injection for layout-controllable editing; and (3) attention refocusing combined with expanded prompt-guided inpainting to enhance local fidelity. Leveraging only a pre-trained text-to-image diffusion model, the method performs high-quality editing via forward inference—without any fine-tuning or training. Experiments demonstrate that it achieves semantic precision within the bounding box, exact preservation outside it, and seamless cross-boundary integration without artifacts—outperforming existing zero-shot editing approaches in fidelity and controllability.
📝 Abstract
We introduce ObjectAdd, a training-free diffusion modification method that adds user-expected objects into a user-specified area. The motivation for ObjectAdd is twofold: first, describing everything in one prompt can be difficult, and second, users often need to add objects to a generated image. To accommodate real-world use, ObjectAdd maintains accurate image consistency after adding objects through technical innovations in: (1) embedding-level concatenation to ensure correct coalescing of text embeddings; (2) object-driven layout control with latent and attention injection to ensure objects enter the user-specified area; (3) prompted image inpainting in an attention-refocusing and object-expansion fashion to ensure the rest of the image stays the same. Given a text-prompted image, ObjectAdd allows users to specify a box and an object, and achieves: (1) adding the object inside the box area; (2) exact content outside the box area; (3) flawless fusion between the two areas.
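The core of the latent-injection idea above can be sketched as a masked blend in latent space: inside the user-specified box the latent comes from the object-conditioned branch, while everything outside is copied from the original image's latent, guaranteeing exact out-of-box preservation. This is a minimal illustrative sketch, not the paper's exact algorithm; the function name and toy grids are assumptions for demonstration.

```python
def blend_latents(z_orig, z_obj, box):
    """Illustrative masked latent blending (not ObjectAdd's exact procedure).

    z_orig, z_obj: 2-D grids (lists of lists) of latent values.
    box: (top, left, bottom, right), half-open, in latent coordinates.
    """
    top, left, bottom, right = box
    h, w = len(z_orig), len(z_orig[0])
    out = [row[:] for row in z_orig]            # copy: exact content outside the box
    for i in range(max(0, top), min(h, bottom)):
        for j in range(max(0, left), min(w, right)):
            out[i][j] = z_obj[i][j]             # inject object-branch latent inside the box
    return out

# Toy 4x4 latents: 0.0 marks the original image, 1.0 the object branch.
z_orig = [[0.0] * 4 for _ in range(4)]
z_obj = [[1.0] * 4 for _ in range(4)]
blended = blend_latents(z_orig, z_obj, (1, 1, 3, 3))
```

In the actual method this blend would be applied at each denoising step, with attention injection steering the object toward the box so the boundary fuses naturally rather than showing a hard seam.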