🤖 AI Summary
How well large multimodal models (LMMs) reason mathematically when answer options are presented as images has not been systematically assessed, despite its importance for fine-grained multimodal understanding. Method: We introduce VisioMath—the first benchmark specifically designed for mathematical reasoning with visual answer choices—comprising 1,800 multiple-choice questions and 8,070 answer images. It emphasizes fine-grained mathematical discrimination among visually similar options and jointly evaluates multi-image comprehension and mathematical reasoning, supported by rigorous human verification and diverse question types. Contribution/Results: Experiments reveal that the state-of-the-art LMM GPT-4o achieves only 45.9% accuracy, far below its performance on text-based counterparts, providing the first empirical evidence of this limitation in current LMMs. VisioMath thus establishes a new, challenging frontier for multimodal reasoning evaluation.
📝 Abstract
Large Multimodal Models (LMMs) have demonstrated remarkable problem-solving capabilities across various domains. However, their ability to perform mathematical reasoning when answer options are represented as images, an essential aspect of multi-image comprehension, remains underexplored. To bridge this gap, we introduce VisioMath, a benchmark designed to evaluate mathematical reasoning in multimodal contexts involving image-based answer choices. VisioMath comprises 8,070 images and 1,800 multiple-choice questions, where each answer option is an image, presenting unique challenges to existing LMMs. To the best of our knowledge, VisioMath is the first dataset specifically tailored for mathematical reasoning in scenarios with image-based options, where fine-grained distinctions between answer choices are critical for accurate problem-solving. We systematically evaluate state-of-the-art LMMs on VisioMath and find that even the most advanced models struggle with this task. Notably, GPT-4o achieves only 45.9% accuracy, underscoring the limitations of current models in reasoning over visually similar answer choices. By addressing a crucial gap in existing benchmarks, VisioMath establishes a rigorous testbed for future research, driving advancements in multimodal reasoning.