Can VLMs Reason Robustly? A Neuro-Symbolic Investigation

πŸ“… 2026-03-24
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the limited robustness of current vision-language models (VLMs) under covariate shiftβ€”where perceptual input distributions change while reasoning rules remain invariant. To overcome this, the authors propose VLC, a neuro-symbolic approach that explicitly decouples and coordinates perception and reasoning by compiling task rules into symbolic circuits and executing precise symbolic inference over object concepts extracted by a VLM. This framework ensures both interpretability and verifiability. Evaluated on three visual deductive reasoning benchmarks with distinct rule sets, VLC consistently achieves high accuracy and strong robustness under covariate shift, substantially outperforming end-to-end fine-tuned baselines and existing neuro-symbolic methods.

πŸ“ Abstract
Vision-Language Models (VLMs) have been applied to a wide range of reasoning tasks, yet it remains unclear whether they can reason robustly under distribution shifts. In this paper, we study covariate shifts in which the perceptual input distribution changes while the underlying prediction rules do not. To investigate this question, we consider visual deductive reasoning tasks, where a model is required to answer a query given an image and logical rules defined over the object concepts in the image. Empirically, we find that VLMs fine-tuned through gradient-based end-to-end training can achieve high in-distribution accuracy but fail to generalize under such shifts, suggesting that fine-tuning does not reliably induce the underlying reasoning function. This motivates a neuro-symbolic perspective that decouples perception from reasoning. However, we further observe that recent neuro-symbolic approaches that rely on black-box components for reasoning can still exhibit inconsistent robustness across tasks. To address this issue, we propose VLC, a neuro-symbolic method that combines VLM-based concept recognition with circuit-based symbolic reasoning. In particular, task rules are compiled into a symbolic program, specifically a circuit, which executes the rules exactly over the object concepts recognized by the VLM. Experiments on three visual deductive reasoning tasks with distinct rule sets show that VLC consistently achieves strong performance under covariate shifts, highlighting its ability to support robust reasoning.
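The abstract's core idea is that task rules are compiled once into a symbolic circuit and then executed exactly over the object concepts a VLM recognizes, so that covariate shift can only affect perception, never the reasoning step. A minimal sketch of that decoupling, assuming a hypothetical rule and a stand-in for the VLM's concept output (this is not the paper's implementation):

```python
# Illustrative sketch: a task rule compiled into a small boolean circuit,
# evaluated exactly over object concepts assumed to come from a VLM.
from typing import Callable, Dict

# A circuit node maps a concept assignment (name -> truth value) to a bool.
Circuit = Callable[[Dict[str, bool]], bool]

def var(name: str) -> Circuit:
    """Leaf node: look up a recognized concept (e.g. 'red', 'circle')."""
    return lambda env: env[name]

def AND(*gates: Circuit) -> Circuit:
    return lambda env: all(g(env) for g in gates)

def OR(*gates: Circuit) -> Circuit:
    return lambda env: any(g(env) for g in gates)

def NOT(gate: Circuit) -> Circuit:
    return lambda env: not gate(env)

# Hypothetical rule: "answer yes iff the scene contains a red circle or a
# blue square" -- compiled once, then executed deterministically.
rule = OR(AND(var("red"), var("circle")),
          AND(var("blue"), var("square")))

# Stand-in for VLM concept recognition on one image. Under covariate shift
# only this perceptual dict changes; the circuit (the rules) stays fixed.
concepts = {"red": True, "circle": True, "blue": False, "square": False}
print(rule(concepts))  # -> True
```

The design point this illustrates: because the circuit executes the rules exactly, any robustness failure must come from the concept-recognition stage, which is what makes the behavior interpretable and verifiable.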
Problem

Research questions and friction points this paper is trying to address.

Vision-Language Models
Robust Reasoning
Distribution Shift
Covariate Shift
Visual Deductive Reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neuro-Symbolic Reasoning
Vision-Language Models
Covariate Shift
Symbolic Circuits
Visual Deductive Reasoning
πŸ”Ž Similar Papers
No similar papers found.