🤖 AI Summary
This work addresses the challenge of disentangling perceptual reasoning from linguistic memorization in large vision-language models (VLMs), which often produce human-aligned responses on classic visual illusion images. To this end, the authors propose VI-Probe, an evaluation framework that pairs controlled illusion perturbations with matched non-illusion control images to quantitatively separate perception from memory. By combining graded perturbations, a control-group design, and new metrics—Polarity-Flip Consistency, a Template Fixation Index, and a control-normalized illusion multiplier—the study reveals heterogeneous response mechanisms across VLMs: GPT-5 primarily relies on memorized knowledge, Claude-Opus-4.1 exhibits competition between perception and memory, and Qwen-series models are constrained by limited visual processing capacity. These findings indicate that VLM behaviors are driven by a confluence of factors rather than a single cognitive mechanism.
📝 Abstract
Large Vision-Language Models (VLMs) often answer classic visual illusions "correctly" on original images, yet persist with the same responses when illusion factors are inverted, even though the visual change is obvious to humans. This raises a fundamental question: do VLMs perceive visual changes, or merely recall memorized patterns? While several studies have noted this phenomenon, the underlying causes remain unclear. To move from observation to systematic understanding, this paper introduces VI-Probe, a controllable visual-illusion framework with graded perturbations and matched visual controls (without the illusion inducer) that disentangles visually grounded perception from language-driven recall. Unlike prior work that focuses on averaged accuracy, we measure stability and sensitivity using Polarity-Flip Consistency, a Template Fixation Index, and an illusion multiplier normalized against matched controls. Experiments across different model families reveal that response persistence arises from heterogeneous causes rather than a single mechanism. For instance, GPT-5 exhibits memory override, Claude-Opus-4.1 shows perception-memory competition, while Qwen variants suggest visual-processing limits. Our findings challenge single-cause views and motivate probing-based evaluation that measures both knowledge and sensitivity to controlled visual change. Data and code are available at https://sites.google.com/view/vi-probe/.
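The abstract names the metrics but does not give their formulas; as a rough intuition for the control-normalized illusion multiplier, one plausible reading is the ratio of a model's answer-flip rate on perturbed illusion images to its flip rate on the matched control images. The sketch below is an illustrative assumption, not VI-Probe's published definition; all function names are hypothetical.

```python
# Hypothetical sketch of a control-normalized illusion multiplier.
# The paper does not publish exact formulas; these definitions are
# illustrative assumptions, not VI-Probe's actual implementation.

def flip_rate(answers_original, answers_perturbed):
    """Fraction of items whose answer changes after perturbation."""
    assert len(answers_original) == len(answers_perturbed)
    flips = sum(a != b for a, b in zip(answers_original, answers_perturbed))
    return flips / len(answers_original)

def illusion_multiplier(illusion_orig, illusion_pert,
                        control_orig, control_pert, eps=1e-9):
    """Sensitivity on illusion images relative to matched controls
    (same scenes without the illusion inducer). Values near 1 suggest
    the model reacts to perturbations similarly with or without the
    illusion; values near 0 suggest rigid, memory-like responses on
    illusion images despite visible change."""
    return flip_rate(illusion_orig, illusion_pert) / (
        flip_rate(control_orig, control_pert) + eps)
```

Under this reading, a model that never changes its answer on inverted illusions while readily updating on controls would score near zero, matching the "memory override" pattern the abstract attributes to GPT-5.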