🤖 AI Summary
Existing sequential approaches to multi-instruction, natural-language-guided image editing suffer from error accumulation and instruction conflicts, leading to incomplete edits. To address this, we propose the Instruction Influence Disentanglement (IID) framework, the first method enabling parallel multi-instruction guidance within a single diffusion denoising process. IID leverages self-attention to derive instruction-specific spatial masks that precisely localize and disentangle the influence regions of individual instructions. It then integrates localized editing guidance with instruction-aware mask modeling, jointly regulating multi-instruction semantics within a Diffusion Transformer (DiT) architecture. Experiments demonstrate that IID significantly reduces the required diffusion steps while achieving higher edit fidelity and instruction completion rates than both U-Net- and DiT-based baselines across multiple benchmarks. This work addresses the core challenge of coordinated multi-instruction editing in diffusion-based image generation.
📝 Abstract
Instruction-guided image editing enables users to specify modifications using natural language, offering greater flexibility and control. Among existing frameworks, Diffusion Transformers (DiTs) outperform U-Net-based diffusion models in scalability and performance. However, while real-world scenarios often require concurrent execution of multiple instructions, step-by-step editing suffers from accumulated errors and degraded quality, and integrating multiple instructions into a single prompt usually results in incomplete edits due to instruction conflicts. We propose Instruction Influence Disentanglement (IID), a novel framework enabling parallel execution of multiple instructions in a single denoising process, designed for DiT-based models. By analyzing self-attention mechanisms in DiTs, we identify distinctive attention patterns in multi-instruction settings and derive instruction-specific attention masks to disentangle each instruction's influence. These masks guide the editing process to ensure localized modifications while preserving consistency in non-edited regions. Extensive experiments on open-source and custom datasets demonstrate that IID reduces diffusion steps while improving fidelity and instruction completion compared to existing baselines. The code will be publicly released upon acceptance of the paper.
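The core idea described above, thresholding each instruction's attention into a spatial mask and confining that instruction's edit to its mask, can be sketched in a few lines. This is an illustrative NumPy toy under assumed shapes, not the paper's implementation: `instruction_masks`, `masked_guidance`, the token-span layout, and the threshold `tau` are all hypothetical stand-ins for the actual DiT attention processing.

```python
import numpy as np

def instruction_masks(attn, instr_token_spans, tau=0.5):
    """Derive one binary spatial mask per instruction from attention weights.

    attn: (num_image_tokens, num_text_tokens) image-to-text attention.
    instr_token_spans: list of (start, end) text-token ranges, one per instruction.
    Returns a list of boolean masks over image tokens (hypothetical scheme).
    """
    masks = []
    for start, end in instr_token_spans:
        # Average attention toward this instruction's tokens -> per-location relevance.
        score = attn[:, start:end].mean(axis=1)
        # Normalize to [0, 1] and threshold into a binary influence region.
        score = (score - score.min()) / (score.max() - score.min() + 1e-8)
        masks.append(score > tau)
    return masks

def masked_guidance(latent, edits, masks):
    """Apply each instruction's edit only inside its own mask; locations
    covered by no mask keep the original latent (preserves non-edited regions)."""
    out = latent.copy()
    for edit, mask in zip(edits, masks):
        out[mask] = edit[mask]
    return out
```

In a real DiT, the attention maps would come from the model's self-attention layers at each denoising step and the "latent" would be the diffusion latent; here flat 1-D arrays stand in for spatial feature maps to keep the sketch minimal.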