🤖 AI Summary
Existing text-driven image editing methods struggle to achieve accurate local shape and layout transformations while maintaining global consistency, whereas interactive image-based approaches, though precise, rely heavily on manual annotations and are inefficient. This paper proposes an MLLM-driven fine-grained object-level editing framework: structured reasoning automatically generates pre- and post-edit object masks, which then guide mask-based diffusion models to apply the requested transformation precisely. To support rigorous evaluation, the authors introduce VOCEdits, the first benchmark dataset featuring ground-truth object masks and geometric transformation annotations. Experiments show that the method significantly outperforms state-of-the-art text-based editors in both localization and transformation accuracy, while reducing human interaction effort to roughly one-fifth of that required by interactive methods. To the authors' knowledge, this is the first approach to achieve high-precision, low-intervention object-level semantic editing.
📄 Abstract
Diffusion models have significantly improved text-to-image generation, producing high-quality, realistic images from textual descriptions. Beyond generation, object-level image editing remains a challenging problem, requiring precise modifications while preserving visual coherence. Existing text-based instructional editing methods struggle with localized shape and layout transformations, often introducing unintended global changes. Image interaction-based approaches offer better accuracy but require manual human effort to provide precise guidance. To reduce this manual effort while maintaining high editing accuracy, we propose POEM, a framework for Precise Object-level Editing using Multimodal Large Language Models (MLLMs). POEM leverages MLLMs to analyze instructional prompts and generate precise object masks before and after the transformation, enabling fine-grained control without extensive user input. This structured reasoning stage guides the diffusion-based editing process, ensuring accurate object localization and transformation. To evaluate our approach, we introduce VOCEdits, a benchmark dataset based on PASCAL VOC 2012, augmented with instructional edit prompts, ground-truth transformations, and precise object masks. Experimental results show that POEM outperforms existing text-based image editing approaches in precision and reliability while reducing manual effort compared to interaction-based methods.
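The two-stage pipeline described above (reasoning stage predicts pre- and post-edit object masks; a mask-guided editor then applies the change) can be sketched in miniature. This is a toy illustration only: the function names, the pixel-shift "instruction", and the naive pixel-copy editor are assumptions standing in for the paper's actual MLLM reasoning and diffusion components.

```python
def plan_edit(source_mask, shift):
    """Stand-in for the MLLM reasoning stage: given the object's current
    binary mask and a parsed instruction (here reduced to a (dy, dx) pixel
    shift), predict the post-edit mask."""
    h, w = len(source_mask), len(source_mask[0])
    dy, dx = shift
    target = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if source_mask[y][x] and 0 <= y + dy < h and 0 <= x + dx < w:
                target[y + dy][x + dx] = 1
    return target


def apply_edit(image, source_mask, target_mask, background=0):
    """Stand-in for the mask-guided diffusion stage: clear the source
    region, then paint the object into the target region. A real editor
    would inpaint the vacated area and synthesize the transformed object."""
    h, w = len(image), len(image[0])
    # Remove the object from its original location.
    out = [[background if source_mask[y][x] else image[y][x]
            for x in range(w)] for y in range(h)]
    # Toy object model: a single intensity value copied into the target mask.
    vals = [image[y][x] for y in range(h) for x in range(w) if source_mask[y][x]]
    obj = vals[0] if vals else background
    for y in range(h):
        for x in range(w):
            if target_mask[y][x]:
                out[y][x] = obj
    return out
```

The point of the sketch is the interface, not the editor: localization and transformation are decided symbolically (mask in, mask out) before any pixels change, which is what lets the generative stage stay tightly constrained.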