🤖 AI Summary
This work exposes a critical fragility in vision-language model (VLM) evaluation benchmarks such as BLINK: model performance is highly sensitive to non-semantic visual artifacts (e.g., marker color, marker size, JPEG compression level), which can distort evaluations and even reverse model rankings. Controlled experiments across nine widely used open- and closed-source VLMs show that these effects can be exploited to lift weaker models above stronger ones; for instance, slightly enlarging the visual marker lets the open-source InternVL3-8B rank alongside or above the much larger proprietary Gemini 2.5 Pro, a sensitivity not observed on conventional semantic benchmarks. To mitigate this instability, the authors curate VPBench, a larger visually prompted benchmark that renders each question with 16 semantically equivalent yet perceptually diverse visual marker variants, and release it together with open-source analysis tools for reproducible evaluation.
📝 Abstract
A key challenge in evaluating VLMs is testing models' ability to analyze visual content independently from their textual priors. Recent benchmarks such as BLINK probe visual perception through visual prompting, where a question about visual content is paired with the coordinates it refers to, and those coordinates are explicitly marked in the image itself. While these benchmarks are an important part of VLM evaluation, we find that existing models are surprisingly fragile to seemingly irrelevant details of visual prompting: simply changing a visual marker from red to blue can completely change rankings among models on a leaderboard. By evaluating nine commonly used open- and closed-source VLMs on two visually prompted tasks, we demonstrate how details of benchmark setup, including visual marker design and dataset size, significantly influence model performance and leaderboard rankings. These effects can even be exploited to lift weaker models above stronger ones; for instance, slightly increasing the size of the visual marker lets the open-source InternVL3-8B rank on par with or above much larger proprietary models like Gemini 2.5 Pro. We further show that low-level inference choices that are often ignored in benchmarking, such as JPEG compression levels in API calls, can also change model rankings. These details have substantially larger impacts on visually prompted benchmarks than on conventional semantic VLM evaluations. To mitigate this instability, we curate existing datasets to create VPBench, a larger visually prompted benchmark with 16 visual marker variants. VPBench and additional analysis tools are released at https://lisadunlap.github.io/vpbench/.
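To make the perturbations concrete, here is a minimal sketch of the two kinds of non-semantic variation the abstract describes: rendering the same point prompt with different marker colors and sizes, and re-encoding the image at different JPEG quality levels. The file name, coordinates, and parameter values are illustrative assumptions; this is not the released VPBench toolchain.

```python
# Illustrative sketch (not the VPBench toolchain): render one point prompt
# under several non-semantic marker variants, then re-encode at different
# JPEG quality levels, as an API call might.
from io import BytesIO
from PIL import Image, ImageDraw

def draw_marker(img, xy, color="red", radius=8):
    """Draw a filled circular marker at pixel coordinates xy."""
    out = img.copy()
    draw = ImageDraw.Draw(out)
    x, y = xy
    draw.ellipse([x - radius, y - radius, x + radius, y + radius], fill=color)
    return out

def jpeg_roundtrip(img, quality=75):
    """Simulate lossy JPEG re-encoding at a given quality level."""
    buf = BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf)

image = Image.open("example.png")  # hypothetical input image
point = (120, 80)                  # coordinate the question refers to

# Semantically equivalent variants: same point, different rendering.
variants = [
    draw_marker(image, point, color=c, radius=r)
    for c in ("red", "blue")
    for r in (4, 8, 16)
]

# Low-level inference choice: the same variant at two compression levels.
compressed = [jpeg_roundtrip(variants[0], quality=q) for q in (95, 50)]
```

Every image produced above points at exactly the same referent; the paper's finding is that such renderings alone can shift model accuracy enough to reorder leaderboards.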