🤖 AI Summary
This work addresses a challenge that existing large language models face when repairing multimodal bugs reported with GUI screenshots: converting the images to text often discards critical spatial relationships, leading to inaccurate alignment between visual elements and code. To overcome this limitation, the authors propose GALA, a framework that replaces implicit semantic guessing with explicit structured reasoning. GALA enforces cross-modal semantic and relational consistency by constructing multi-level alignments, from the file level down to the function level, between an image-derived UI graph and a code dependency graph, going beyond conventional keyword matching. The method integrates repository-wide file reference analysis, function call graph reasoning, and alignment-guided context for patch generation. Evaluated on the SWE-bench Multimodal benchmark, GALA achieves state-of-the-art performance, substantially improving the accuracy of multimodal bug localization and repair.
📝 Abstract
Large Language Model (LLM)-based Automated Program Repair (APR) has shown strong potential on textual benchmarks, yet struggles in multimodal scenarios where bugs are reported with GUI screenshots. Existing methods typically convert images into plain text, which discards critical spatial relationships and causes a severe disconnect between visual observations and code components, leading localization to degrade into imprecise keyword matching. To bridge this gap, we propose GALA (Graph Alignment for Localization in APR), a framework that shifts multimodal APR from implicit semantic guessing to explicit structural reasoning. GALA operates in four stages: it first constructs an Image UI Graph to capture visual elements and their structural relationships; then performs file-level alignment by cross-referencing this UI graph with repository-level structures (e.g., file references) to locate candidate files; next conducts function-level alignment by reasoning over fine-grained code dependencies (e.g., call graphs) to precisely ground visual elements to corresponding code components; and finally performs patch generation within the grounded code context based on the aligned files and functions. By systematically enforcing both semantic and relational consistency across modalities, GALA establishes a highly accurate visual-to-code mapping. Evaluations on the SWE-bench Multimodal benchmark demonstrate that GALA achieves state-of-the-art performance, highlighting the effectiveness of hierarchical structural alignment.
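The four-stage pipeline described above can be sketched on toy data. Everything here is an illustrative assumption, not the paper's implementation: the graph representations, the `file_level_alignment` and `function_level_alignment` helpers, and the overlap-based scoring are all hypothetical stand-ins for GALA's actual alignment machinery.

```python
# Hypothetical sketch of GALA's four-stage pipeline on toy data.
# All structures, names, and scoring rules are illustrative assumptions.
from collections import Counter

# Stage 1: an Image UI Graph -- visual elements plus structural edges
# (in the paper, extracted from the GUI screenshot).
ui_graph = {
    "nodes": ["SubmitButton", "LoginForm", "ErrorBanner"],
    "edges": [("LoginForm", "SubmitButton"), ("LoginForm", "ErrorBanner")],
}

# Toy repository structure: file -> identifiers it references.
repo_files = {
    "ui/login.py": ["LoginForm", "SubmitButton", "render_banner"],
    "core/auth.py": ["check_password", "hash_token"],
}

# Toy call graph: function -> callees.
call_graph = {
    "LoginForm.submit": ["SubmitButton.on_click", "render_banner"],
    "render_banner": [],
}

def file_level_alignment(ui_graph, repo_files):
    """Stage 2: rank files by overlap between UI nodes and file references."""
    scores = Counter()
    for path, refs in repo_files.items():
        scores[path] = len(set(ui_graph["nodes"]) & set(refs))
    return [path for path, score in scores.most_common() if score > 0]

def function_level_alignment(candidate_files, ui_graph, call_graph):
    """Stage 3: keep functions whose call-graph neighborhood mentions a UI node."""
    nodes = set(ui_graph["nodes"])
    grounded = []
    for fn, callees in call_graph.items():
        neighborhood = {fn, *callees}
        if any(node in name for node in nodes for name in neighborhood):
            grounded.append(fn)
    return grounded

files = file_level_alignment(ui_graph, repo_files)
funcs = function_level_alignment(files, ui_graph, call_graph)
# Stage 4 (patch generation) would prompt an LLM with `files` and `funcs`
# as the grounded code context; it is omitted from this sketch.
print(files, funcs)
```

On this toy input, only `ui/login.py` overlaps the UI nodes, and only `LoginForm.submit` has a call-graph neighborhood touching them, so those become the grounded context for patch generation.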