🤖 AI Summary
This work addresses the limitations of existing evaluation methods, which rely on static difficulty labels and simplistic metrics and thus fail to capture how well vision-language models adaptively select between tool-augmented visual reasoning and pure textual reasoning. To this end, we propose AdaptMMBench, a multimodal benchmark spanning five domains—real-world scenarios, OCR, GUI, knowledge, and mathematics—that introduces a dynamic difficulty identification mechanism grounded in model capability boundaries. Our framework employs the Matthews Correlation Coefficient (MCC) to assess the appropriateness of reasoning-mode selection and enables multidimensional process analysis, covering critical reasoning steps, tool effectiveness, and computational efficiency. Experiments reveal that adaptive capability improves with model scale yet remains decoupled from final accuracy, that critical step coverage correlates positively with performance, and that tool effectiveness varies significantly across model architectures.
📝 Abstract
Adaptive multimodal reasoning has emerged as a promising frontier for Vision-Language Models (VLMs), aiming to dynamically switch between tool-augmented visual reasoning and text reasoning to enhance both effectiveness and efficiency. However, existing evaluations rely on static difficulty labels and simplistic metrics, which fail to capture the dynamic nature of difficulty relative to varying model capacities. Consequently, they obscure the distinction between adaptive mode selection and general performance while neglecting fine-grained process analyses. In this paper, we propose AdaptMMBench, a comprehensive benchmark for adaptive multimodal reasoning across five domains—real-world, OCR, GUI, knowledge, and math—encompassing both direct perception and complex reasoning tasks. AdaptMMBench utilizes a Matthews Correlation Coefficient (MCC) metric to evaluate the rationality of reasoning-mode selection, isolating this meta-cognitive ability by dynamically identifying task difficulty based on models' capability boundaries. Moreover, AdaptMMBench facilitates multi-dimensional process evaluation across key step coverage, tool effectiveness, and computational efficiency. Our evaluation reveals that while adaptive mode selection scales with model capacity, it notably decouples from final accuracy. Conversely, key step coverage aligns with performance, though tool effectiveness remains highly inconsistent across model architectures.
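To make the MCC-based selection metric concrete, the sketch below shows how such a score could be computed. This is an illustrative interpretation, not the benchmark's actual implementation: it assumes the "positive" class is a task beyond the model's text-only capability boundary (tool use warranted), and a "positive" prediction is the model choosing the tool-augmented mode.

```python
from math import sqrt

def mcc(tp: int, fp: int, tn: int, fn: int) -> float:
    """Matthews Correlation Coefficient over binary mode-selection decisions.

    Illustrative framing (an assumption, not the paper's definition):
      tp: hard tasks correctly routed to tool-augmented reasoning
      fp: easy tasks needlessly routed to tools
      tn: easy tasks correctly answered with text-only reasoning
      fn: hard tasks incorrectly answered without tools
    """
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # MCC is conventionally defined as 0 when any marginal count is zero.
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Toy confusion counts for one hypothetical model:
score = mcc(tp=40, fp=15, tn=35, fn=10)
```

Unlike raw accuracy, MCC stays near 0 for a model that always picks one mode, even on an imbalanced task mix, which is why it isolates selection rationality from overall performance.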