🤖 AI Summary
Current vision-language models (VLMs) are not robust on visual mathematical reasoning tasks: minor perturbations to numerical values or visual elements can cause them to fail. Moreover, existing benchmarks are static and one-dimensional, so they cannot rigorously evaluate how stably models generalize.
Method: We introduce DynaMath, the first dynamic visual mathematical benchmark, built from 501 programmable seed problems. Each seed abstracts a question's visual parameters symbolically and synthesizes variants procedurally, automatically generating over 10,000 controllably perturbed concrete instances spanning multiple types of variation. We further propose a definition of worst-case robustness and a framework for quantifying it.
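The paper's actual seed programs are not reproduced here, but a minimal sketch conveys the idea of a programmable seed: the question's visual parameters stay symbolic, and each call procedurally samples a concrete variant. All names and the question template below are illustrative assumptions, not DynaMath's real code.

```python
import random

def seed_linear_intercept(rng: random.Random) -> dict:
    """Hypothetical seed program: one abstract question about the
    y-intercept of a line, whose perturbable parameters (slope and
    intercept) are sampled at variant-generation time."""
    slope = rng.choice([-3, -2, -1, 1, 2, 3])
    intercept = rng.randint(-5, 5)
    # A real DynaMath seed would also render the graph as an image;
    # here we only emit the question text and the ground-truth answer.
    question = (f"The line y = {slope}x + {intercept} is plotted. "
                "What is its y-intercept?")
    return {"question": question, "answer": intercept}

# Generate several concrete variants of the same seed question.
rng = random.Random(0)
variants = [seed_linear_intercept(rng) for _ in range(10)]
```

Because each variant carries its own ground-truth answer, correctness can be checked automatically for every generated instance.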
Contribution/Results: A systematic evaluation of 14 state-of-the-art VLMs on 5,010 generated instances reveals an average 32.7% drop in worst-case accuracy relative to average-case accuracy, exposing a critical fragility of current models. The open-source, reproducible, and extensible toolchain establishes a new paradigm for studying the robustness of VLM mathematical reasoning.
📝 Abstract
The rapid advancements in Vision-Language Models (VLMs) have shown great potential in tackling mathematical reasoning tasks that involve visual context. Unlike humans, who can reliably apply solution steps to similar problems with minor modifications, we found that SOTA VLMs such as GPT-4o can consistently fail in these scenarios, revealing limitations in their mathematical reasoning capabilities. In this paper, we investigate the robustness of mathematical reasoning in VLMs and evaluate how well these models perform under different variants of the same question, such as changes in visual numerical values or function graphs. While several vision-based math benchmarks have been developed to assess VLMs' problem-solving capabilities, these benchmarks contain only static sets of problems and cannot easily be used to evaluate mathematical reasoning robustness. To fill this gap, we introduce DynaMath, a dynamic visual math benchmark designed for in-depth assessment of VLMs. DynaMath includes 501 high-quality, multi-topic seed questions, each represented as a Python program. These programs are carefully designed and annotated to enable the automatic generation of a much larger set of concrete questions, covering many types of visual and textual variation. DynaMath allows us to evaluate the generalization ability of VLMs by assessing their performance under varying input conditions of a seed question. We evaluated 14 SOTA VLMs on 5,010 generated concrete questions. Our results show that the worst-case model accuracy, defined as the percentage of seed questions answered correctly in all 10 variants, is significantly lower than the average-case accuracy. Our analysis emphasizes the need to study the robustness of VLMs' reasoning abilities, and DynaMath provides valuable insights to guide the development of more reliable models for mathematical reasoning.
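The two accuracy notions in the abstract can be made concrete with a few lines of code. This is a minimal sketch under an assumed data layout (a mapping from seed-question IDs to per-variant correctness flags), not DynaMath's actual evaluation code:

```python
def average_and_worst_case_accuracy(results: dict[str, list[bool]]):
    """results maps each seed question ID to a list of per-variant
    correctness flags (e.g. 10 booleans for 10 generated variants)."""
    flags = [f for variant_flags in results.values() for f in variant_flags]
    # Average-case: fraction of all generated variants answered correctly.
    average = sum(flags) / len(flags)
    # Worst-case: fraction of seeds answered correctly in *every* variant.
    worst = sum(all(v) for v in results.values()) / len(results)
    return average, worst

# A model can look strong on average yet fragile in the worst case:
results = {"seed_1": [True] * 10,
           "seed_2": [True] * 9 + [False],
           "seed_3": [True] * 9 + [False]}
avg, worst = average_and_worst_case_accuracy(results)
# avg ≈ 0.933, worst ≈ 0.333
```

The toy example illustrates why the gap matters: one wrong variant per seed barely dents average-case accuracy but collapses the worst-case score, which is exactly the fragility the benchmark is designed to surface.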