🤖 AI Summary
Traditional image editing methods suffer from weak spatial reasoning, inaccurate region localization, and semantic inconsistency in complex scenes. This paper introduces the first mask-free, spatially aware image editing framework, enabling end-to-end, natural-language-instruction-driven editing. Our method integrates a multimodal large language model (MLLM) with a hypergraph neural network, optimized jointly under instruction guidance. Key contributions include: (1) a region-aware token and mask embedding joint representation for fine-grained spatial understanding; (2) an instruction-driven reasoning segmentation pipeline that eliminates manual mask input; and (3) a hypergraph-enhanced inpainting module that models cross-region structural dependencies to ensure global semantic consistency. Evaluated on the Reason-Edit benchmark, our approach achieves new state-of-the-art performance—improving segmentation accuracy by +12.6%, instruction adherence by +9.4%, and visual fidelity, while effectively mitigating local focusing bias.
📝 Abstract
Recent advancements in image editing have utilized large-scale multimodal models to enable intuitive, natural instruction-driven interactions. However, conventional methods still face significant challenges, particularly in spatial reasoning, precise region segmentation, and maintaining semantic consistency, especially in complex scenes. To overcome these challenges, we introduce SmartFreeEdit, a novel end-to-end framework that integrates a multimodal large language model (MLLM) with a hypergraph-enhanced inpainting architecture, enabling precise, mask-free image editing guided exclusively by natural language instructions. The key innovations of SmartFreeEdit include:(1)the introduction of region aware tokens and a mask embedding paradigm that enhance the spatial understanding of complex scenes;(2) a reasoning segmentation pipeline designed to optimize the generation of editing masks based on natural language instructions;and (3) a hypergraph-augmented inpainting module that ensures the preservation of both structural integrity and semantic coherence during complex edits, overcoming the limitations of local-based image generation. Extensive experiments on the Reason-Edit benchmark demonstrate that SmartFreeEdit surpasses current state-of-the-art methods across multiple evaluation metrics, including segmentation accuracy, instruction adherence, and visual quality preservation, while addressing the issue of local information focus and improving global consistency in the edited image. Our project will be available at https://github.com/smileformylove/SmartFreeEdit.