🤖 AI Summary
This work addresses key limitations in existing multimodal automated program repair methods, namely rigid pipelines, insufficient fine-grained visual grounding, and poor reuse of past failure experiences. To overcome these challenges, we propose FailureMem, a novel framework that synergistically integrates a hybrid workflow-agent architecture, region-level active visual perception, and a failure memory bank to enable structured localization and flexible reasoning. By leveraging multimodal fusion, region-level visual grounding, and memory-based modeling of historical failures, FailureMem significantly enhances both exploratory capability and knowledge reuse during the repair process. Evaluated on the SWE-bench Multimodal benchmark, FailureMem achieves a repair success rate 3.7% higher than that of GUIRepair.
📄 Abstract
Multimodal Automated Program Repair (MAPR) extends traditional program repair by requiring models to jointly reason over source code, textual issue descriptions, and visual artifacts such as GUI screenshots. While recent LLM-based repair systems have shown promising results, existing approaches face several limitations: rigid workflow pipelines restrict exploration during debugging, visual reasoning is often performed over full-page screenshots without localized grounding, and failed repair attempts are rarely transformed into reusable knowledge. To address these challenges, we propose FailureMem, a multimodal repair framework that integrates three key mechanisms: a hybrid workflow-agent architecture that balances structured localization with flexible reasoning, active perception tools that enable region-level visual grounding, and a Failure Memory Bank that converts past repair attempts into reusable guidance. Experiments on SWE-bench Multimodal demonstrate that FailureMem improves the resolved rate over GUIRepair by 3.7%.
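To make the Failure Memory Bank idea concrete, the sketch below shows one minimal way such a component could work: failed repair attempts are stored as structured records and later retrieved by similarity to the current issue so their lessons can steer new attempts. All class and field names here are hypothetical illustrations, not the paper's actual implementation, and the keyword-overlap retrieval is a deliberately simple stand-in for whatever embedding-based matching a real system would use.

```python
from dataclasses import dataclass, field


@dataclass
class FailureRecord:
    """One failed repair attempt, distilled into reusable guidance."""
    issue_summary: str    # short description of the issue being repaired
    attempted_patch: str  # identifier or text of the patch that failed
    failure_reason: str   # e.g. failing test output or a visual regression note


@dataclass
class FailureMemoryBank:
    """Stores past failures and retrieves those most relevant to a new issue."""
    records: list[FailureRecord] = field(default_factory=list)

    def add(self, record: FailureRecord) -> None:
        self.records.append(record)

    def retrieve(self, issue_summary: str, top_k: int = 2) -> list[FailureRecord]:
        """Rank stored failures by word overlap with the current issue text."""
        query = set(issue_summary.lower().split())

        def overlap(rec: FailureRecord) -> int:
            return len(query & set(rec.issue_summary.lower().split()))

        return sorted(self.records, key=overlap, reverse=True)[:top_k]


# Usage: record two failed attempts, then query before a new repair attempt.
bank = FailureMemoryBank()
bank.add(FailureRecord("button misaligned on mobile layout",
                       "patch-1", "screenshot diff still shows offset"))
bank.add(FailureRecord("chart tooltip renders off-screen",
                       "patch-2", "regression test tooltip_position failed"))

hits = bank.retrieve("tooltip clipped off-screen on chart hover", top_k=1)
print(hits[0].failure_reason)  # prints "regression test tooltip_position failed"
```

In a full repair loop, the retrieved records would be serialized into the agent's prompt so it avoids repeating the same failing strategy.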