🤖 AI Summary
Existing image editing models struggle to model coherent intermediate logical pathways from an initial state to a target state in complex dynamic scenes, often lacking an understanding of procedural and causal relationships. To address this gap, this work proposes InEdit-Bench, the first systematic benchmark designed to evaluate image editing models' ability to reason over and generate intermediate logical paths. The benchmark spans four task categories (state transition, dynamic process, temporal sequence, and scientific simulation) and introduces fine-grained evaluation criteria covering logical coherence, visual naturalness, and fidelity to path constraints. Using a meticulously annotated test set with both automatic and human evaluation, the work assesses 14 state-of-the-art models, reveals their common deficiencies in intermediate path generation, and establishes a standardized platform to guide future research.
📝 Abstract
Multimodal generative models have made significant strides in image editing, demonstrating impressive performance on a variety of static tasks. However, their proficiency typically does not extend to complex scenarios that require dynamic reasoning, leaving them ill-equipped to model the coherent intermediate logical pathways that constitute a multi-step evolution from an initial state to a final one. This capacity is crucial for unlocking deeper procedural and causal understanding in visual manipulation. To systematically measure this limitation, we introduce InEdit-Bench, the first evaluation benchmark dedicated to reasoning over intermediate pathways in image editing. InEdit-Bench comprises meticulously annotated test cases covering four fundamental task categories: state transition, dynamic process, temporal sequence, and scientific simulation. To enable fine-grained evaluation, we further propose a set of criteria that assess the logical coherence and visual naturalness of generated pathways, as well as a model's fidelity to specified path constraints. Our comprehensive evaluation of 14 representative image editing models on InEdit-Bench reveals significant and widespread shortcomings in this domain. By providing a standardized and challenging benchmark, we aim for InEdit-Bench to catalyze research and steer development toward more dynamic, reasoning-aware, and intelligent multimodal generative models.
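To make the described evaluation structure concrete, below is a minimal sketch of how a benchmark record and its per-criterion scores might be organized. Everything here is an illustrative assumption, not the paper's actual schema: the names (`InEditCase`, `PathwayScores`), the 0-1 score scale, and the equal-weight aggregation are hypothetical; the paper may score, weight, or report the three criteria differently.

```python
from dataclasses import dataclass, field
from enum import Enum
from statistics import mean


class TaskCategory(Enum):
    # The four task categories named in the benchmark.
    STATE_TRANSITION = "state_transition"
    DYNAMIC_PROCESS = "dynamic_process"
    TEMPORAL_SEQUENCE = "temporal_sequence"
    SCIENTIFIC_SIMULATION = "scientific_simulation"


@dataclass
class InEditCase:
    """One hypothetical test case: an initial image, a target instruction,
    and annotated constraints the intermediate pathway must respect."""
    case_id: str
    category: TaskCategory
    initial_image: str                  # path to the source image
    target_instruction: str             # desired final state
    path_constraints: list[str] = field(default_factory=list)


@dataclass
class PathwayScores:
    """Scores for one generated pathway along the three criteria the paper
    names, on an assumed 0-1 scale."""
    logical_coherence: float     # do intermediate steps follow causally?
    visual_naturalness: float    # do individual frames look plausible?
    constraint_fidelity: float   # are the specified path constraints obeyed?

    def overall(self) -> float:
        # Equal-weight average; an assumption for illustration only.
        return mean([self.logical_coherence,
                     self.visual_naturalness,
                     self.constraint_fidelity])


if __name__ == "__main__":
    case = InEditCase(
        case_id="melt-001",
        category=TaskCategory.DYNAMIC_PROCESS,
        initial_image="images/ice_cube.png",
        target_instruction="show the ice cube fully melted into a puddle",
        path_constraints=["each step shows strictly less solid ice",
                          "lighting and camera viewpoint stay fixed"],
    )
    scores = PathwayScores(logical_coherence=0.8,
                           visual_naturalness=0.7,
                           constraint_fidelity=0.6)
    print(case.category.value, f"overall={scores.overall():.2f}")
```

Separating per-criterion scores from the aggregate mirrors the benchmark's fine-grained design: it lets a leaderboard surface, for example, models that render natural frames yet break causal ordering, rather than hiding that failure inside a single number.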