MSRAMIE: Multimodal Structured Reasoning Agent for Multi-instruction Image Editing

📅 2026-03-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the significant performance degradation of existing image editing models when handling complex, multi-step, and interdependent instructions, primarily due to the scarcity of high-quality multi-instruction training data. The authors propose a training-free multimodal agent framework that constructs a structured reasoning topology using a Tree-of-States and a Graph-of-References to decompose intricate instructions into executable steps. This framework orchestrates iterative interactions between a multimodal large language model (MLLM)-driven planner and off-the-shelf image editing models. It enables state transitions, cross-step information aggregation, and backtracking to the original input, substantially enhancing edit controllability and interpretability. Experiments demonstrate that under high-complexity instructions, the method improves instruction-following accuracy by over 15% and more than doubles the success rate of completing all edits in a single attempt, while preserving excellent perceptual quality and visual consistency.

📝 Abstract
Existing instruction-based image editing models perform well with simple, single-step instructions but degrade in realistic scenarios that involve multiple, lengthy, and interdependent directives. A main cause is the scarcity of training data with complex multi-instruction annotations, and collecting such data and retraining these models is costly. To address this challenge, we propose MSRAMIE, a training-free agent framework built on a Multimodal Large Language Model (MLLM). MSRAMIE takes existing editing models as plug-in components and handles multi-instruction tasks via structured multimodal reasoning. It orchestrates iterative interactions between an MLLM-based Instructor and an image editing Actor, introducing a novel reasoning topology that comprises the proposed Tree-of-States and Graph-of-References. During inference, complex instructions are decomposed into multiple editing steps that support state transitions, cross-step information aggregation, and recall of the original input, enabling systematic exploration of the image editing space and flexible, progressive output refinement. The visualizable inference topology further provides interpretable and controllable decision pathways. Experiments show that as instruction complexity increases, MSRAMIE improves instruction following by over 15% and more than doubles the probability of finishing all modifications in a single run, while preserving perceptual quality and maintaining visual consistency.
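The decompose-edit-verify loop described in the abstract can be sketched as a minimal state tree with retry-from-current-node backtracking. This is an illustrative mock, not the paper's implementation: `plan_steps`, `apply_edit`, and `verify` are hypothetical stand-ins for the MLLM Instructor and the image editing Actor.

```python
# Illustrative sketch of a Tree-of-States editing loop (not the paper's code).
# plan_steps / apply_edit / verify are hypothetical stand-ins for the
# MLLM Instructor and the editing Actor described in the abstract.

from dataclasses import dataclass
from typing import Optional

@dataclass
class StateNode:
    image: str                         # placeholder for an image handle
    done: tuple = ()                   # sub-instructions applied so far
    parent: Optional["StateNode"] = None

def plan_steps(instruction: str) -> list[str]:
    # Hypothetical: an MLLM would decompose the compound instruction.
    return [s.strip() for s in instruction.split(",") if s.strip()]

def apply_edit(image: str, step: str) -> str:
    # Hypothetical Actor call: returns a new image state for one edit step.
    return f"{image}+[{step}]"

def verify(image: str, step: str) -> bool:
    # Hypothetical Instructor check that the step took effect.
    return step in image

def edit(image: str, instruction: str, max_retries: int = 2) -> StateNode:
    node = StateNode(image=image)
    for step in plan_steps(instruction):
        for _ in range(max_retries):
            candidate = apply_edit(node.image, step)
            if verify(candidate, step):
                # Advance: add a child state to the tree.
                node = StateNode(image=candidate,
                                 done=node.done + (step,),
                                 parent=node)
                break
            # Backtrack: stay on the current node and retry from it.
    return node

final = edit("img", "add a hat, make sky blue")
print(final.done)  # → ('add a hat', 'make sky blue')
```

Because each node keeps a `parent` pointer, a failed step retries from the last verified state rather than compounding errors, which is the role the Tree-of-States plays in the framework.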
Problem

Research questions and friction points this paper is trying to address.

image editing
multi-instruction
complex instructions
instruction following
multimodal reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal Structured Reasoning
Tree-of-States
Graph-of-References
Training-Free Agent
Multi-instruction Image Editing
Zhaoyuan Qiu
University Of Melbourne, Grattan Street, Parkville, Victoria 3010, Australia
Ken Chen
University Of Melbourne, Grattan Street, Parkville, Victoria 3010, Australia
Xiangwei Wang
University Of Melbourne, Grattan Street, Parkville, Victoria 3010, Australia
Yu Xia
Research Fellow, The University of Melbourne
machine learning
Sachith Seneviratne
Research Fellow in Computer Vision, University Of Melbourne
Machine Learning, Computer Vision, Natural Language Processing, Urban Informatics
Saman Halgamuge
University Of Melbourne, Grattan Street, Parkville, Victoria 3010, Australia