🤖 AI Summary
Current vision-language models (VLMs) lack systematic evaluation on complex graphical reasoning—encompassing spatial, relational, and abstract reasoning. To address this gap, we introduce ReasonBench, the first dedicated benchmark for graphical reasoning, comprising 1,613 authentic intelligence test items. We propose a dual-optimization framework: (1) DiaCoT, a hierarchical dialogue-based chain-of-thought method that enhances interpretability and stepwise reasoning; and (2) ReasonTune, a task-adaptive fine-tuning strategy that strengthens abstract structural modeling. Evaluated across 11 state-of-the-art VLMs, our approach achieves a 33.5% absolute accuracy improvement, revealing critical bottlenecks in structured graphical understanding. ReasonBench establishes a standardized, challenging evaluation platform and advances VLMs toward higher-order cognitive reasoning capabilities.
📝 Abstract
Evaluating the performance of vision-language models (VLMs) on graphical reasoning tasks has become an important research topic. However, VLMs still show clear deficiencies in matching human-level graphical reasoning, especially in complex graphical reasoning and abstract problem solving; these areas remain understudied, and existing work focuses only on simple graphics. To evaluate VLMs on complex graphical reasoning, we propose ReasonBench, the first evaluation benchmark focused on structured graphical reasoning tasks, comprising 1,613 questions from real-world intelligence tests. ReasonBench covers reasoning dimensions related to location, attribute, quantity, and multi-element tasks, providing a comprehensive evaluation of VLMs' spatial, relational, and abstract reasoning capabilities. We benchmark 11 mainstream VLMs (both closed-source and open-source) and reveal significant limitations of current models. Based on these findings, we propose a dual optimization strategy: Diagrammatic Reasoning Chain (DiaCoT) enhances the interpretability of reasoning by decomposing it into layers, and ReasonTune enhances the task adaptability of model reasoning through training, together improving VLM performance by 33.5%. All experimental data and code are in the repository: https://huggingface.co/datasets/cistine/ReasonBench.
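To make the layered-decomposition idea behind DiaCoT concrete, here is a minimal sketch of how such a prompt might be assembled for a multiple-choice graphical reasoning item. The `ReasonItem` schema, the layer names, and the function below are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass


@dataclass
class ReasonItem:
    """A multiple-choice graphical reasoning item (illustrative schema)."""
    question: str
    options: list[str]


# Decompose reasoning into ordered layers, mirroring the idea of a
# hierarchical chain of thought: perceive elements, then relations,
# then the abstract rule, then the answer. Layer names are assumed.
DIACOT_LAYERS = [
    "Layer 1 - Perception: describe the shapes, positions, and counts in each panel.",
    "Layer 2 - Relation: state how attributes change from panel to panel.",
    "Layer 3 - Abstraction: infer the underlying transformation rule.",
    "Layer 4 - Decision: apply the rule and select one option.",
]


def build_diacot_prompt(item: ReasonItem) -> str:
    """Build a layered chain-of-thought prompt for one benchmark item."""
    lines = [item.question, "Options:"]
    lines += [f"  {chr(65 + i)}. {opt}" for i, opt in enumerate(item.options)]
    lines.append("Reason step by step through the following layers:")
    lines += DIACOT_LAYERS
    lines.append("Final answer (letter only):")
    return "\n".join(lines)


item = ReasonItem(
    question="Which figure completes the 3x3 matrix?",
    options=["circle", "square", "triangle", "star"],
)
prompt = build_diacot_prompt(item)
```

In this sketch the prompt text (alongside the item's image) would be sent to a VLM; the explicit layers encourage the model to verbalize perception and relational analysis before committing to an answer.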