🤖 AI Summary
Existing text-guided image editing models struggle with multi-step, chain-dependent instructions and lack dedicated evaluation benchmarks. This paper introduces the first benchmark specifically designed for chain-dependent editing instructions. Methodologically, it proposes three key innovations: (1) a chain-dependency evaluation framework and a novel mask-based visual consistency metric that emphasizes fidelity in non-edited regions; (2) an instruction composition modeling approach and a systematic methodology for constructing chain tasks; and (3) a lightweight Chain-of-Thought (CoT) guidance strategy enabling explicit reasoning control across editing steps. Experiments demonstrate that the proposed benchmark effectively reveals performance disparities among state-of-the-art models on complex editing tasks. Moreover, the CoT strategy significantly improves both editing accuracy and cross-step consistency. The code and dataset are publicly released.
📝 Abstract
Text-driven image editing has achieved remarkable success in following single instructions. However, real-world scenarios often involve complex, multi-step instructions, particularly "chain" instructions where operations are interdependent. Current models struggle with these intricate directives, and existing benchmarks inadequately evaluate such capabilities. Specifically, they often overlook multi-instruction and chain-instruction complexities, and common consistency metrics are flawed. To address this, we introduce ComplexBench-Edit, a novel benchmark designed to systematically assess model performance on complex, multi-instruction, and chain-dependent image editing tasks. ComplexBench-Edit also features a new vision consistency evaluation method that accurately assesses non-modified regions by excluding edited areas. Furthermore, we propose a simple yet powerful Chain-of-Thought (CoT)-based approach that significantly enhances the ability of existing models to follow complex instructions. Our extensive experiments demonstrate ComplexBench-Edit's efficacy in differentiating model capabilities and highlight the superior performance of our CoT-based method in handling complex edits. The data and code are released at https://github.com/llllly26/ComplexBench-Edit.
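The core idea behind the mask-excluding consistency evaluation can be sketched as follows. This is a minimal illustration, not the paper's actual metric: the function name `masked_consistency`, the use of mean absolute pixel difference, and the mask convention are all assumptions for the sake of the example; the point is simply that fidelity is measured only over pixels outside the edited region.

```python
import numpy as np

def masked_consistency(original: np.ndarray, edited: np.ndarray,
                       edit_mask: np.ndarray) -> float:
    """Mean absolute pixel difference restricted to non-edited regions.

    original, edited: float arrays of shape (H, W, C), values in [0, 1].
    edit_mask: bool array of shape (H, W); True marks edited pixels.
    Lower is better: a low score means the model preserved the
    regions it was not asked to change.
    """
    keep = ~edit_mask                      # evaluate only untouched pixels
    if not keep.any():
        return 0.0                         # whole image edited; nothing to compare
    diff = np.abs(original - edited)       # (H, W, C) per-pixel error
    return float(diff[keep].mean())        # average over non-edited pixels only

# Toy example: an edit changes the top-left quadrant, the rest is untouched.
rng = np.random.default_rng(0)
orig = rng.random((8, 8, 3))
edit = orig.copy()
edit[:4, :4] = 0.0                         # simulated edit
mask = np.zeros((8, 8), dtype=bool)
mask[:4, :4] = True
print(masked_consistency(orig, edit, mask))  # 0.0: non-edited region is intact
```

A plain full-image difference would penalize the intended edit itself; restricting the average to `~edit_mask` is what lets the metric reward fidelity in regions the instruction never touched.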