POEM: Precise Object-level Editing via MLLM control

📅 2025-04-10
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing text-driven image editing methods struggle to achieve accurate local shape/layout transformations and global consistency at the same time, while interactive image-based approaches, though precise, rely heavily on manual annotations and are slow. This paper proposes an MLLM-driven fine-grained object-level editing framework: structured reasoning automatically generates pre- and post-edit object masks, which then guide mask-conditioned diffusion models to perform precise, locally constrained edits. To support rigorous evaluation, we introduce VOCEdits, the first benchmark dataset featuring ground-truth object masks and geometric transformation annotations. Experiments demonstrate that our method significantly outperforms state-of-the-art text-based editors in both localization and transformation accuracy, while reducing human interaction effort to approximately one-fifth of that required by interactive methods. To the best of our knowledge, this is the first approach to achieve high-precision, low-intervention object-level semantic editing.

πŸ“ Abstract
Diffusion models have significantly improved text-to-image generation, producing high-quality, realistic images from textual descriptions. Beyond generation, object-level image editing remains a challenging problem, requiring precise modifications while preserving visual coherence. Existing text-based instructional editing methods struggle with localized shape and layout transformations, often introducing unintended global changes. Image interaction-based approaches offer better accuracy but require manual human effort to provide precise guidance. To reduce this manual effort while maintaining a high image editing accuracy, in this paper, we propose POEM, a framework for Precise Object-level Editing using Multimodal Large Language Models (MLLMs). POEM leverages MLLMs to analyze instructional prompts and generate precise object masks before and after transformation, enabling fine-grained control without extensive user input. This structured reasoning stage guides the diffusion-based editing process, ensuring accurate object localization and transformation. To evaluate our approach, we introduce VOCEdits, a benchmark dataset based on PASCAL VOC 2012, augmented with instructional edit prompts, ground-truth transformations, and precise object masks. Experimental results show that POEM outperforms existing text-based image editing approaches in precision and reliability while reducing manual effort compared to interaction-based methods.
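The localization accuracy that VOCEdits measures (predicted object masks versus ground-truth masks) typically reduces to a mask-overlap metric. A minimal sketch of mask intersection-over-union with NumPy; the function name `mask_iou` is illustrative, not from the paper:

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-union between two binary object masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter / union) if union else 1.0  # two empty masks count as a perfect match

# Toy check: two 4x4 object boxes offset by one pixel.
pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True
gt = np.zeros((8, 8), dtype=bool); gt[3:7, 3:7] = True
print(round(mask_iou(pred, gt), 3))  # intersection 9 px, union 23 px -> 0.391
```

A per-object IoU like this, averaged over the benchmark, is the standard way to score how well an editor localized its target before applying any transformation.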
Problem

Research questions and friction points this paper is trying to address.

Enables precise object-level image editing via MLLM control
Reduces manual effort while maintaining high editing accuracy
Addresses challenges in localized shape and layout transformations
Innovation

Methods, ideas, or system contributions that make the work stand out.

MLLM for precise object mask generation
Structured reasoning guides diffusion editing
VOCEdits benchmark for evaluation
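The pre- and post-edit object masks above are related by whatever geometric transformation the instruction asks for (a shift, resize, and so on). A minimal sketch of one such transformation, a pure-NumPy pixel translation of a binary mask; this is illustrative only, not the paper's implementation:

```python
import numpy as np

def translate_mask(mask: np.ndarray, dy: int, dx: int) -> np.ndarray:
    """Shift a binary object mask by (dy, dx) pixels; parts moved off-canvas are clipped."""
    h, w = mask.shape
    out = np.zeros_like(mask)
    ys, xs = np.nonzero(mask)
    ys, xs = ys + dy, xs + dx
    keep = (ys >= 0) & (ys < h) & (xs >= 0) & (xs < w)
    out[ys[keep], xs[keep]] = True
    return out

# "Move the object two pixels down and three to the right."
src = np.zeros((6, 6), dtype=bool); src[1:3, 1:3] = True
dst = translate_mask(src, dy=2, dx=3)
edit_region = src | dst  # area a mask-guided editor would be allowed to repaint
```

Restricting the diffusion editor to the union of the source and target masks is one natural way to keep the edit local, matching the goal of changing the object without disturbing the rest of the image.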