🤖 AI Summary
This work addresses the limited reliability of vision-language models (VLMs) in non-linguistic visual reasoning tasks by introducing VRIQ, a benchmark that spans both abstract puzzles and natural-image tasks and quantitatively disentangles the contributions of perception and reasoning to model failures. Carefully designed fine-grained diagnostic probes evaluate model weaknesses across perceptual dimensions such as shape, quantity, spatial position, and 3D/depth understanding. Experimental results show that VLMs achieve only around 28% accuracy on abstract tasks and 45% on natural-image tasks. Notably, 56% of failures stem purely from perceptual deficits, 43% arise from joint perception-reasoning failures, and only 1% are attributable to pure reasoning errors, underscoring perception as the primary bottleneck in current VLM performance.
📝 Abstract
Recent progress in Vision-Language Models (VLMs) has raised the question of whether they can reliably perform nonverbal reasoning. To this end, we introduce VRIQ (Visual Reasoning IQ), a novel benchmark designed to assess and analyze the visual reasoning ability of VLMs. We evaluate models on two sets of tasks: abstract puzzle-style tasks and natural-image reasoning tasks. We find that on abstract puzzles, performance remains near random, with an average accuracy of around 28%, while natural-image tasks yield better but still weak results at 45% accuracy. We also find that tool-augmented reasoning brings only modest improvements. To uncover the source of this weakness, we introduce diagnostic probes targeting perception and reasoning. Our analysis shows that around 56% of failures arise from perception alone, 43% from both perception and reasoning, and only 1% from reasoning alone. This motivates us to design fine-grained diagnostic probe questions targeting specific perception categories (e.g., shape, count, position, 3D/depth), revealing that certain categories cause more failures than others. Our benchmark and analysis establish that current VLMs, even with visual reasoning tools, remain unreliable abstract reasoners, largely due to perceptual limitations, and offer a principled basis for improving visual reasoning in multimodal systems.
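The abstract describes the perception/reasoning failure attribution only at a high level. The sketch below illustrates one way such an attribution could be computed, assuming each benchmark item is paired with a perception probe and a reasoning probe and that a failed item is bucketed by which probes the model also fails. All field and function names here are hypothetical and not taken from the VRIQ release.

```python
from collections import Counter

def attribute_failures(records):
    """Bucket each failed benchmark item by which diagnostic probes also failed.

    Each record is assumed to carry three boolean fields (hypothetical names):
      task_correct       -- was the main reasoning question answered correctly?
      perception_correct -- did the model pass the paired perception probe?
      reasoning_correct  -- did the model pass the paired reasoning probe?
    """
    counts = Counter()
    for r in records:
        if r["task_correct"]:
            continue  # only failed items are attributed
        if not r["perception_correct"] and r["reasoning_correct"]:
            counts["perception_only"] += 1
        elif not r["perception_correct"] and not r["reasoning_correct"]:
            counts["perception_and_reasoning"] += 1
        elif not r["reasoning_correct"]:
            counts["reasoning_only"] += 1
        else:
            counts["unattributed"] += 1  # both probes passed; failure source unclear
    total = sum(counts.values()) or 1
    return {k: v / total for k, v in counts.items()}

# Toy example: three failed items, one falling in each bucket.
records = [
    {"task_correct": False, "perception_correct": False, "reasoning_correct": True},
    {"task_correct": False, "perception_correct": False, "reasoning_correct": False},
    {"task_correct": False, "perception_correct": True,  "reasoning_correct": False},
    {"task_correct": True,  "perception_correct": True,  "reasoning_correct": True},
]
print(attribute_failures(records))  # each bucket ~0.33 of failures
```

Applied over a full benchmark run, the resulting proportions would correspond to the kind of breakdown reported above (56% perception-only, 43% joint, 1% reasoning-only), though the paper's exact attribution rules may differ.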