🤖 AI Summary
This work addresses the significant performance gap between vision-language models and text-only models in solving mathematical problems presented as images, a limitation rooted in challenges of formula recognition, layout understanding, and multimodal context parsing. To bridge this modality gap, we propose VisTIRA, a framework that integrates structured tools to iteratively decompose image-based math problems into natural language reasoning steps and executable Python code, enabling end-to-end visual mathematical reasoning. We establish a benchmark and training paradigm for this task, leveraging synthetic LaTeX-rendered problem images and synthetic tool-use trajectories derived from real-world student-homework images (SnapAsk). Our analysis reveals that the modality gap narrows with increasing model scale and demonstrates the complementary roles of OCR grounding and structured reasoning. Experiments show that tool-integrated supervision substantially boosts image-based reasoning, with the largest gains for smaller models, including on the SnapAsk dataset.
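To make the decomposition loop concrete, here is a minimal sketch of what a tool-integrated solving loop could look like; the `vlm_generate` interface, step format, and turn budget are illustrative assumptions rather than the paper's actual implementation.

```python
import io
import contextlib

def run_python(code: str, env: dict) -> str:
    """Execute a model-proposed Python snippet and capture its stdout."""
    buf = io.StringIO()
    try:
        with contextlib.redirect_stdout(buf):
            exec(code, env)  # NOTE: sandbox this in any real deployment
    except Exception as exc:
        return f"Error: {exc!r}"
    return buf.getvalue().strip()

def solve_image_problem(image, vlm_generate, max_turns: int = 8) -> str:
    """Hypothetical tool-integrated loop: the VLM alternates between natural
    language rationales and Python code, conditioned on earlier tool outputs."""
    transcript, env = [], {}
    for _ in range(max_turns):
        # `vlm_generate` is a placeholder for any VLM call that, given the
        # problem image and the transcript so far, returns either a final
        # answer or the next (rationale, code) step.
        step = vlm_generate(image=image, history=transcript)
        if step["type"] == "final_answer":
            return step["answer"]
        transcript.append(("rationale", step["rationale"]))
        transcript.append(("tool_output", run_python(step["code"], env)))
    return "No answer within the turn budget"
```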
📝 Abstract
Vision-language models (VLMs) lag behind text-only language models on mathematical reasoning when the same problems are presented as images rather than text. We empirically characterize this as a modality gap: the same question in text form yields markedly higher accuracy than its visually typeset counterpart, owing to compounded failures in reading dense formulas, parsing layout, and interpreting mixed symbolic-diagrammatic context. First, we introduce VisTIRA (Vision and Tool-Integrated Reasoning Agent), a tool-integrated reasoning framework that enables structured problem solving by iteratively decomposing a math problem given as an image into natural language rationales and executable Python steps that lead to the final answer. Second, we build a framework to measure and improve visual math reasoning: a LaTeX-based pipeline that converts chain-of-thought math corpora (e.g., NuminaMath) into challenging image counterparts, and a large set of synthetic tool-use trajectories, derived from a real-world homework-style image dataset (SnapAsk), for fine-tuning VLMs. Our experiments show that tool-integrated supervision improves image-based reasoning and that OCR grounding can further narrow the gap for smaller models, although its benefit diminishes at scale. These findings indicate that the severity of the modality gap correlates inversely with model size, and that structured reasoning and OCR-based grounding are complementary strategies for advancing visual mathematical reasoning.
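For illustration, a minimal sketch of how a text problem might be typeset into an image counterpart is shown below; it uses matplotlib's mathtext as a lightweight stand-in for the LaTeX rendering pipeline described above, and the example problem, file name, figure size, and DPI are arbitrary assumptions.

```python
import matplotlib
matplotlib.use("Agg")  # headless rendering, no display needed
import matplotlib.pyplot as plt

def render_problem_to_image(problem_text: str, out_path: str, dpi: int = 200) -> None:
    """Render a math problem (with $...$ spans) to a PNG using mathtext.
    A stand-in for a full LaTeX pipeline (e.g., pdflatex + PDF-to-image)."""
    fig = plt.figure(figsize=(6, 2))
    fig.text(0.02, 0.5, problem_text, fontsize=14, va="center", wrap=True)
    fig.savefig(out_path, dpi=dpi, bbox_inches="tight")
    plt.close(fig)

# Example: turn one NuminaMath-style text problem into its image counterpart.
render_problem_to_image(
    r"Solve for $x$: $\frac{2x + 3}{5} = 7$.",
    "problem_0001.png",
)
```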