🤖 AI Summary
Existing instruction-driven image editing methods struggle to comprehend complex scenes, preserve semantic consistency, and enable fine-grained control. To address these challenges, we propose FireEdit, a fine-grained instruction-based image editing framework built on a region-aware vision-language model (VLM). The approach augments the VLM with region tokens for fine-grained visual perception and couples it with a time-aware target injection module and a hybrid visual cross-attention mechanism in the diffusion model, enabling precise local edits while maintaining global semantic coherence. The framework significantly improves understanding of intricate spatial relationships and dynamic editing instructions. Extensive evaluations on multiple benchmarks demonstrate state-of-the-art performance, particularly in local edit accuracy, semantic consistency between source and edited images, and faithful adherence to fine-grained editing instructions.
📝 Abstract
Instruction-based image editing methods have recently made significant progress by leveraging the powerful cross-modal understanding capabilities of vision-language models (VLMs). However, they still face challenges in three key areas: 1) complex scenarios; 2) semantic consistency; and 3) fine-grained editing. To address these issues, we propose FireEdit, an innovative Fine-grained Instruction-based image editing framework that exploits a REgion-aware VLM. FireEdit is designed to accurately comprehend user instructions and ensure effective control over the editing process. Specifically, we enhance the fine-grained visual perception capabilities of the VLM by introducing additional region tokens. However, relying solely on the output of the LLM to guide the diffusion model may lead to suboptimal editing results. Therefore, we propose a Time-Aware Target Injection module and a Hybrid Visual Cross-Attention module. The former dynamically adjusts the guidance strength at different denoising stages by integrating timestep embeddings with text embeddings; the latter supplies visual details for image editing, thereby preserving semantic consistency between the edited result and the source image. By combining the VLM enhanced with fine-grained region tokens and the time-dependent diffusion model, FireEdit demonstrates significant advantages in comprehending editing instructions and maintaining high semantic consistency. Extensive experiments indicate that our approach surpasses state-of-the-art instruction-based image editing methods. Our project is available at https://zjgans.github.io/fireedit.github.io.
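To make the Time-Aware Target Injection idea concrete, the sketch below shows one plausible way to fuse a diffusion timestep embedding with text embeddings so that the guidance strength varies across denoising steps. This is a minimal, hypothetical PyTorch illustration: the class name, dimensions, and the FiLM-style scale/shift modulation are assumptions for exposition, not the paper's released implementation.

```python
import torch
import torch.nn as nn

class TimeAwareTargetInjection(nn.Module):
    """Hypothetical sketch: modulate text guidance by denoising timestep."""

    def __init__(self, text_dim=768, time_dim=256):
        super().__init__()
        # Project the timestep embedding to a per-channel scale and shift.
        self.to_scale_shift = nn.Sequential(
            nn.SiLU(),
            nn.Linear(time_dim, 2 * text_dim),
        )

    def forward(self, text_emb, time_emb):
        # text_emb: (B, L, text_dim) token embeddings from the VLM/text encoder
        # time_emb: (B, time_dim) embedding of the current diffusion timestep
        scale, shift = self.to_scale_shift(time_emb).chunk(2, dim=-1)
        # FiLM-style modulation: guidance strength depends on the timestep,
        # so early (coarse) and late (fine) denoising steps are steered differently.
        return text_emb * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)
```

Under this reading, the same instruction embedding is re-weighted at every denoising step, which is one simple mechanism for "dynamically adjusting the guidance strength" as the abstract describes.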