ReFocus: Visual Editing as a Chain of Thought for Structured Image Understanding

📅 2025-01-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal large language models (LLMs) lack the multi-hop selective attention needed to understand structured images such as tables and charts. Method: This paper introduces ReFocus, a framework that treats executable visual edits (drawing boxes, highlighting sections, and masking out areas) as intermediate reasoning steps, or "visual thoughts." ReFocus has multimodal LLMs (e.g., GPT-4o) generate and execute Python image-editing code that modifies the input image, shifting and refining visual focus without introducing additional information. Contribution/Results: ReFocus yields average gains of +11.0% on table understanding tasks and +6.8% on chart understanding tasks over GPT-4o without visual editing. A newly collected 14K-sample visual chain-of-thought training set provides better supervision than standard QA pairs (+8.0% average) and textual chain-of-thought supervision (+2.6%), establishing grounded visual editing as an effective, interpretable form of intermediate supervision.

📝 Abstract
Structured image understanding, such as interpreting tables and charts, requires strategically refocusing across various structures and texts within an image, forming a reasoning sequence to arrive at the final answer. However, current multimodal large language models (LLMs) lack this multi-hop selective attention capability. In this work, we introduce ReFocus, a simple yet effective framework that equips multimodal LLMs with the ability to generate "visual thoughts" by performing visual editing on the input image through code, shifting and refining their visual focuses. Specifically, ReFocus enables multimodal LLMs to generate Python code to call tools and modify the input image, sequentially drawing boxes, highlighting sections, and masking out areas, thereby enhancing the visual reasoning process. We experiment on a wide range of structured image understanding tasks involving tables and charts. ReFocus largely improves performance on all tasks over GPT-4o without visual editing, yielding an average gain of 11.0% on table tasks and 6.8% on chart tasks. We present an in-depth analysis of the effects of different visual edits, and the reasons why ReFocus can improve performance without introducing additional information. Further, we collect a 14k training set using ReFocus, and show that such visual chain-of-thought with intermediate information offers better supervision than standard VQA data, reaching an 8.0% average gain over the same model trained with QA pairs and 2.6% over CoT.
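The editing operations the abstract describes (drawing boxes, highlighting sections, masking out areas) can be sketched as simple raster tools that an LLM's generated Python could call. This is a minimal illustration, not the paper's actual tool API: the helper names and the plain pixel-grid image representation are assumptions for the example.

```python
# Hypothetical ReFocus-style editing tools (illustrative names, not the
# paper's API). An image is modeled as a height x width grid of (R, G, B)
# pixel tuples; every helper returns an edited copy of the input.

def draw_box(img, top, left, bottom, right, color=(255, 0, 0)):
    """Draw a one-pixel rectangle outline to direct attention to a region."""
    out = [row[:] for row in img]
    for x in range(left, right + 1):
        out[top][x] = color
        out[bottom][x] = color
    for y in range(top, bottom + 1):
        out[y][left] = color
        out[y][right] = color
    return out

def highlight(img, top, left, bottom, right, tint=(255, 255, 0), alpha=0.4):
    """Blend a translucent tint over a region, e.g. one table column."""
    out = [row[:] for row in img]
    for y in range(top, bottom + 1):
        for x in range(left, right + 1):
            out[y][x] = tuple(
                round(alpha * t + (1 - alpha) * c)
                for t, c in zip(tint, out[y][x])
            )
    return out

def mask_out(img, top, left, bottom, right, fill=(255, 255, 255)):
    """Overwrite an irrelevant region so attention shifts elsewhere."""
    out = [row[:] for row in img]
    for y in range(top, bottom + 1):
        for x in range(left, right + 1):
            out[y][x] = fill
    return out

# A two-step "visual thought" on a toy 6x8 gray image: first mask the
# distracting right half, then box the cells that answer the question.
img = [[(200, 200, 200)] * 8 for _ in range(6)]
step1 = mask_out(img, 0, 4, 5, 7)
step2 = draw_box(step1, 1, 1, 3, 3)
```

Because each step is a new image, a sequence of such calls forms an inspectable chain: every intermediate frame shows exactly where the model's focus moved.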
Problem

Research questions and friction points this paper is trying to address.

Complex Image Understanding
Attention Transfer
Step-by-step Thinking
Innovation

Methods, ideas, or system contributions that make the work stand out.

ReFocus
Attention Modulation
Image Editing for AI