🤖 AI Summary
Despite strong downstream performance, it remains unclear whether current vision-language models (VLMs) possess genuine visual reasoning capabilities or merely exploit linguistic priors. Method: We introduce VisRes Bench, a benchmark that systematically decouples three hierarchical visual reasoning abilities: perceptual completion, single-attribute rule inference, and multi-attribute compositional reasoning. To isolate visual reasoning from language bias, it removes textual context and applies controlled image perturbations (e.g., blur, occlusion, rotation), yielding over 19,000 structured zero-shot test samples. Contribution/Results: Experiments show that state-of-the-art VLMs suffer drastic performance degradation, approaching chance level, under subtle visual perturbations, exposing a reliance on superficial pattern matching rather than abstract visual reasoning. VisRes Bench thus establishes the first systematic, language-debiased evaluation framework for assessing genuine visual reasoning in VLMs.
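To make the idea of controlled perturbations concrete, here is a minimal Pillow sketch of the three perturbation types the summary names. The specific parameters (blur radius, patch size, rotation angle) are our assumptions for illustration, not the benchmark's published settings.

```python
from PIL import Image, ImageFilter

GRAY = (128, 128, 128)

def perturb(img: Image.Image, kind: str) -> Image.Image:
    """Apply one controlled perturbation (illustrative; not the paper's exact pipeline)."""
    img = img.convert("RGB")
    if kind == "blur":
        # Mild Gaussian blur that preserves global structure.
        return img.filter(ImageFilter.GaussianBlur(radius=2))
    if kind == "occlusion":
        # Paste a gray patch over the center quarter of the image.
        out = img.copy()
        w, h = out.size
        patch = Image.new("RGB", (w // 4, h // 4), GRAY)
        out.paste(patch, ((w - w // 4) // 2, (h - h // 4) // 2))
        return out
    if kind == "rotation":
        # Small in-plane rotation, padding exposed corners with gray.
        return img.rotate(15, fillcolor=GRAY)
    raise ValueError(f"unknown perturbation: {kind}")
```

Perturbations like these leave the underlying rule or pattern intact, so a model that truly reasons over visual structure should be largely unaffected by them.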
📝 Abstract
Vision-Language Models (VLMs) have achieved remarkable progress on tasks such as visual question answering and image captioning, yet the extent to which they perform genuine visual reasoning, rather than relying on linguistic priors, remains unclear. To address this, we introduce VisRes Bench, a benchmark designed to study visual reasoning in naturalistic settings without contextual language supervision. VisRes isolates distinct reasoning abilities across three levels of complexity: Level 1 probes perceptual completion and global image matching under perturbations such as blur, texture changes, occlusion, and rotation; Level 2 tests rule-based inference over a single attribute (e.g., color, count, orientation); and Level 3 targets compositional reasoning that requires integrating multiple visual attributes. Across more than 19,000 controlled task images, we find that state-of-the-art VLMs perform near random under subtle perceptual perturbations, revealing clear limitations in perceptual and relational reasoning and little abstraction beyond pattern recognition. We conclude by discussing how VisRes provides a unified framework for advancing abstract visual reasoning in multimodal research.
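To see how the three-level structure might map onto individual test items, here is a minimal sketch of a plausible sample schema and a chance-level baseline. All field names and the multiple-choice setup are our assumptions; the paper's released format may differ.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VisResSample:
    """Hypothetical schema for one zero-shot test item (field names assumed)."""
    level: int                   # 1 = perceptual completion, 2 = single-attribute rule, 3 = compositional
    image_paths: list[str]       # panel images shown to the model
    choices: list[str]           # candidate answers
    answer_index: int            # index of the correct choice
    perturbation: Optional[str]  # e.g., "blur", "occlusion", "rotation", or None
    attributes: list[str]        # e.g., ["color"] at Level 2, ["color", "count"] at Level 3

def chance_level(sample: VisResSample) -> float:
    """Random-guess accuracy for one item; the bar that perturbed-model accuracy approaches."""
    return 1.0 / len(sample.choices)
```

Under this framing, "near random" has a precise meaning per item: with four candidate answers, for instance, chance is 25%, which is the reference point against which perturbed-model accuracy is judged.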