Paint by Inpaint: Learning to Add Image Objects by Removing Them First

📅 2024-04-28
🏛️ arXiv.org
📈 Citations: 12
Influential: 3
📄 PDF
🤖 AI Summary
This work addresses the challenge of *mask-free, natural object addition* in text-driven image editing. We propose a "remove-then-add" paradigm: large-scale, automatically generated object-removal image pairs are used to train diffusion models to invert inpainting, enabling precise mask-free object addition from text. Our key contributions are: (1) the first deletion-guided framework for object addition; (2) the first large-scale editing dataset with naturally consistent source–target image pairs; and (3) a VLM–LLM fusion pipeline for generating high-quality, diverse natural-language editing instructions. The method integrates diffusion modeling, text-conditional generation, semantic segmentation–guided inpainting, and an automated image-pairing and filtering pipeline. Experiments demonstrate state-of-the-art performance on both object addition and general image editing tasks, with significant quantitative improvements and qualitatively superior results in semantic coherence, visual fidelity, and instruction adherence.
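The core "remove-then-add" trick can be sketched as a simple data inversion: each automatically generated (original, object-removed) pair is flipped so the inpainted image becomes the editing *source* and the natural original becomes the *target*. The helper names and paths below are hypothetical illustrations, not the paper's actual code; the real pipeline produces these pairs with segmentation-guided diffusion inpainting.

```python
from dataclasses import dataclass

@dataclass
class RemovalPair:
    """One automatically generated pair: a natural image and its object-removed version."""
    original: str      # path to the natural image (object present)
    inpainted: str     # path to the segmentation-guided inpainted image (object removed)
    object_name: str   # label of the removed object, e.g. from its segmentation mask

@dataclass
class EditingExample:
    """A training example for the addition model: edit source -> target per instruction."""
    source: str
    target: str
    instruction: str

def invert_pair(pair: RemovalPair) -> EditingExample:
    # Key inversion: the inpainted image is the editing source and the natural
    # original is the target, so the model learns to ADD the removed object.
    # The target is a real photograph, which gives natural (non-synthetic)
    # targets with source-target consistency by construction.
    return EditingExample(
        source=pair.inpainted,
        target=pair.original,
        instruction=f"add a {pair.object_name}",
    )

example = invert_pair(RemovalPair("dog.jpg", "dog_removed.jpg", "dog"))
```

In the actual dataset the plain `"add a {object}"` template is replaced by richer instructions produced by the VLM–LLM pipeline.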
📝 Abstract
Image editing has advanced significantly with the introduction of text-conditioned diffusion models. Despite this progress, seamlessly adding objects to images based on textual instructions without requiring user-provided input masks remains a challenge. We address this by leveraging the insight that removing objects (Inpaint) is significantly simpler than its inverse process of adding them (Paint), attributed to inpainting models that benefit from segmentation mask guidance. Capitalizing on this realization, by implementing an automated and extensive pipeline, we curate a filtered large-scale image dataset containing pairs of images and their corresponding object-removed versions. Using these pairs, we train a diffusion model to invert the inpainting process, effectively adding objects into images. Unlike other editing datasets, ours features natural target images instead of synthetic ones while ensuring source-target consistency by construction. Additionally, we utilize a large Vision-Language Model to provide detailed descriptions of the removed objects and a Large Language Model to convert these descriptions into diverse, natural-language instructions. Our quantitative and qualitative results show that the trained model surpasses existing models in both object addition and general editing tasks. Visit our project page for the released dataset and trained models: https://rotsteinnoam.github.io/Paint-by-Inpaint.
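The abstract's instruction pipeline has two stages: a VLM describes the removed object, then an LLM rewrites that description into varied natural-language edit instructions. The sketch below illustrates only the second stage's input/output contract; the template list is a hypothetical stand-in for the LLM, used here solely to show the data flow.

```python
import random

def diversify_instruction(description: str, rng: random.Random) -> str:
    """Stand-in for the LLM rewriting step: turn a VLM-produced description of
    the removed object into a natural editing instruction. The paper uses an
    LLM for this; fixed templates are an illustrative simplification."""
    templates = [
        "add {d} to the image",
        "place {d} in the scene",
        "insert {d}",
    ]
    # Sampling across templates is what yields diverse instructions for the
    # same underlying (source, target) image pair.
    return rng.choice(templates).format(d=description)

rng = random.Random(0)  # seeded for reproducibility
instr = diversify_instruction("a small brown dog sitting on the grass", rng)
```

In the real pipeline the LLM also varies phrasing, level of detail, and tone, which templates cannot capture.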
Problem

Research questions and friction points this paper is trying to address.

Seamlessly add objects to images from text instructions, without user-provided masks.
Train a diffusion model to invert the inpainting process, turning removal into addition.
Provide natural (non-synthetic) target images with source-target consistency by construction.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated pipeline for image dataset curation
Diffusion model trained to invert inpainting process
Vision-Language Model for detailed object descriptions