🤖 AI Summary
This study systematically evaluates the gap between multimodal large language models (MLLMs) and human cognitive capabilities in abstract visual reasoning (AVR), particularly on matrix reasoning tasks requiring visual working memory and cross-image pattern extrapolation.
Method: We introduce MaRs-VQA, a new dataset, and VCog-Bench, a comprehensive AVR benchmark comprising three datasets built on the Raven's Progressive Matrices (RPM) and Wechsler Intelligence Scale for Children (WISC) paradigms. We evaluate MLLMs in a zero-shot setting and publicly release both the benchmark and the inference pipeline.
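As a rough illustration of what a zero-shot query in such an evaluation could look like, the sketch below sends a single matrix-reasoning item (a matrix image plus labeled answer options) to a vision-capable chat model and reads back the predicted option. The prompt wording, the item fields, and the choice of `gpt-4o` are assumptions for illustration, not the benchmark's actual pipeline.

```python
# Hypothetical zero-shot query of an MLLM on one matrix-reasoning item.
# Prompt wording and item layout are assumptions, not the VCog-Bench API.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def encode_image(path: str) -> str:
    """Base64-encode an image file for the chat completions API."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")


def ask_item(image_path: str, choice_labels: list[str]) -> str:
    """Send one AVR item (matrix image + option labels) and return the raw reply."""
    prompt = (
        "The image shows an incomplete visual matrix. Infer the pattern and pick "
        f"the missing piece from the options {', '.join(choice_labels)}. "
        "Answer with the option label only."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable chat model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{encode_image(image_path)}"}},
            ],
        }],
        temperature=0,
    )
    return response.choices[0].message.content.strip()


print(ask_item("item_001.png", ["A", "B", "C", "D"]))
```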
Contribution/Results: Experiments show that state-of-the-art open-source and closed-source MLLMs fall well short of human performance, including that of children, on AVR tasks, quantifying the visual cognition gap. The publicly released benchmark and accompanying analysis are intended to drive progress toward MLLMs with human-like visual cognition.
📝 Abstract
Recently, Multimodal Large Language Models (MLLMs) have shown great promise in language-guided perceptual tasks such as recognition, segmentation, and object detection. However, their effectiveness in addressing visual cognition problems that require high-level reasoning is not well-established. One such challenge is abstract visual reasoning (AVR) -- the cognitive ability to discern relationships among patterns in a set of images and extrapolate to predict subsequent patterns. This skill is crucial during the early neurodevelopmental stages of children. Inspired by the AVR tasks in Raven's Progressive Matrices (RPM) and the Wechsler Intelligence Scale for Children (WISC), we propose a new dataset, MaRs-VQA, and a new benchmark, VCog-Bench, containing three datasets to evaluate the zero-shot AVR capability of MLLMs and compare their performance with results from existing human intelligence assessments. Our comparative experiments with different open-source and closed-source MLLMs on VCog-Bench reveal a gap between MLLMs and human intelligence, highlighting the visual cognitive limitations of current MLLMs. We believe that the public release of VCog-Bench, which includes MaRs-VQA, together with the inference pipeline, will drive progress toward the next generation of MLLMs with human-like visual cognition abilities.
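To show how per-item predictions from such a zero-shot run might be turned into the kind of accuracy figure used to compare models against human performance, here is a minimal sketch that scores a predictions file. The JSONL field names (`prediction`, `answer`) and the file name are illustrative assumptions, not the released data format.

```python
# Minimal sketch: compute zero-shot accuracy from a JSONL predictions file.
# Field names and file layout are assumptions, not the released VCog-Bench format.
import json


def accuracy(pred_path: str) -> float:
    """Fraction of items where the predicted option label matches the ground truth."""
    correct = total = 0
    with open(pred_path, "r", encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            total += 1
            if record["prediction"].strip().upper() == record["answer"].strip().upper():
                correct += 1
    return correct / total if total else 0.0


print(f"MaRs-VQA zero-shot accuracy: {accuracy('predictions.jsonl'):.3f}")
```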