Evaluating Robustness of Vision-Language Models Under Noisy Conditions

📅 2025-09-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study systematically evaluates the robustness of vision-language models (VLMs) under realistic corruptions—including illumination variation, motion blur, and JPEG compression—across image captioning and visual question answering tasks. We propose the first controllable, multimodal robustness evaluation framework, integrating lexical metrics (BLEU, METEOR, ROUGE, CIDEr) with neural semantic similarity computed via Sentence-BERT. Experiments span multiple public benchmarks and enable cross-model comparison. Key findings are: (1) Caption granularity strongly correlates with robustness—more detailed descriptions degrade more severely under corruption; (2) Larger VLMs (e.g., LLaVA) exhibit superior semantic understanding but lack universal robustness advantages under noise; (3) Motion blur and JPEG compression inflict the most severe performance degradation. Our work establishes a reproducible benchmark and introduces novel analytical dimensions for multimodal robustness research.

Technology Category

Application Category

📝 Abstract
Vision-Language Models (VLMs) have attained exceptional success across multimodal tasks such as image captioning and visual question answering. However, their robustness under noisy conditions remains unfamiliar. In this study, we present a comprehensive evaluation framework to evaluate the performance of several state-of-the-art VLMs under controlled perturbations, including lighting variation, motion blur, and compression artifacts. We used both lexical-based metrics (BLEU, METEOR, ROUGE, CIDEr) and neural-based similarity measures using sentence embeddings to quantify semantic alignment. Our experiments span diverse datasets, revealing key insights: (1) descriptiveness of ground-truth captions significantly influences model performance; (2) larger models like LLaVA excel in semantic understanding but do not universally outperform smaller models; and (3) certain noise types, such as JPEG compression and motion blur, dramatically degrade performance across models. Our findings highlight the nuanced trade-offs between model size, dataset characteristics, and noise resilience, offering a standardized benchmark for future robust multimodal learning.
Problem

Research questions and friction points this paper is trying to address.

Evaluating VLM robustness under noisy conditions
Assessing performance with controlled noise perturbations
Quantifying semantic alignment using diverse metrics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Comprehensive evaluation framework for VLMs
Lexical and neural metrics for semantic alignment
Standardized benchmark for robust multimodal learning
🔎 Similar Papers
No similar papers found.