🤖 AI Summary
This study examines how robust current vision-language models (VLMs) are to realistic image distortions, a question their strong performance on standard benchmarks leaves open. The authors construct a comprehensive evaluation benchmark encompassing 49 perturbation types across 133 distortion settings, systematically assessing four prominent VLM families on MMBench and MMMU-Pro at multiple severity levels. They find that VLMs are surprisingly sensitive to low-intensity spatial perturbations such as glass blur, resampling artifacts, and geometric distortions, with low-severity glass blur alone costing about 8 percentage points and resampling and geometric distortions up to 34, challenging the common intuition that visual severity directly correlates with task difficulty. The findings highlight a critical gap: while contemporary VLMs demonstrate strong semantic understanding, they remain notably weak in spatial robustness, underscoring the need for better modeling of geometric and resampling invariance.
📝 Abstract
Vision-language models (VLMs) achieve strong performance on standard, high-quality datasets, but their behavior under real-world image distortions remains poorly understood. We present VLM-RobustBench, a benchmark spanning 49 augmentation types across noise, blur, weather, digital, and geometric perturbations, evaluated under graded severities (low/mid/high) and binary transforms, for a total of 133 corrupted settings. We evaluate VLMs from four families (Qwen, InternVL, Molmo, Gemma) on two complementary benchmarks: MMBench (visually grounded) and MMMU-Pro (reasoning-oriented). Our results reveal that visual severity is a weak predictor of difficulty: low-severity spatial perturbations often degrade performance more than visually severe photometric corruptions. In particular, low-severity glass_blur reduces MMBench accuracy by about 8 pp on average across models, while the largest drops come from resampling and geometric distortions (e.g., upsample, elastic_transform), reaching up to 34 pp. Overall, our findings suggest that current VLMs are semantically strong but spatially fragile, motivating robustness evaluation protocols and training regimes that emphasize resampling and geometric invariance.
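To make the headline perturbations concrete, below is a minimal Python sketch of the two distortion classes the abstract identifies as most damaging: a resampling artifact and a mild elastic warp. This is a hypothetical reconstruction using PIL and torchvision, not VLM-RobustBench's actual implementation; the `upsample_perturb` helper, the scale factor, the `alpha`/`sigma` values, and `example.jpg` are all illustrative assumptions.

```python
from PIL import Image
from torchvision import transforms as T
from torchvision.transforms import functional as TF

def upsample_perturb(img: Image.Image, scale: float = 0.5) -> Image.Image:
    """Resampling artifact: downscale, then resize back to the original size.

    The 0.5 scale factor is an illustrative assumption, not the
    benchmark's actual 'upsample' setting.
    """
    w, h = img.size
    small = img.resize((max(1, int(w * scale)), max(1, int(h * scale))),
                       Image.BILINEAR)
    return small.resize((w, h), Image.NEAREST)

# Mild geometric warp via torchvision's ElasticTransform; alpha/sigma here
# are placeholders for what the paper would call a "low" severity.
elastic = T.ElasticTransform(alpha=50.0, sigma=5.0)

img = Image.open("example.jpg").convert("RGB")  # hypothetical input image
perturbed = TF.to_pil_image(elastic(TF.to_tensor(upsample_perturb(img))))
perturbed.save("example_perturbed.jpg")
```

Images perturbed this way look nearly identical to the originals, which is the abstract's central observation: the accuracy drop is driven by subtle spatial and resampling changes rather than by visible corruption severity.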