Better Safe Than Sorry? Overreaction Problem of Vision Language Models in Visual Emergency Recognition

📅 2025-05-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study identifies a systematic "overreaction" bias in vision-language models (VLMs) in safety-critical scenarios: while achieving 70–100% success on genuine emergencies, VLMs exhibit alarmingly high false-positive rates (31–96%) on safe contexts, with 10 safe scenarios misclassified by all 14 evaluated VLMs (2B–124B parameters). Method: the authors introduce VERI, a contrastive diagnostic benchmark (200 images, 100 pairs) built through multi-stage human verification and iterative refinement to isolate visually similar yet semantically opposite emergency/safe samples. They propose a two-stage evaluation protocol, risk identification followed by emergency response (sketched below), and conduct error attribution analysis. Results: 88–93% of false positives stem from contextual overinterpretation, and increasing model scale fails to mitigate the bias. These findings offer empirically grounded guidance for evaluating and calibrating VLM reliability before safety-critical multimodal deployment.
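
A minimal sketch of the two-stage protocol described above, assuming a generic `query_vlm(image, prompt)` wrapper around whatever VLM is under test. The function name, prompts, and yes/no parsing are illustrative assumptions, not the paper's published interface:

```python
# Hypothetical harness for a VERI-style two-stage evaluation.
# `query_vlm`, the prompt wording, and the parsing are assumptions;
# the paper does not publish this exact code.

def query_vlm(image_path: str, prompt: str) -> str:
    """Stand-in for a real VLM call (API client, local model, etc.)."""
    raise NotImplementedError("wire up the VLM under test here")

def evaluate_pair(emergency_img: str, safe_img: str) -> dict:
    """Run stage 1 (risk identification) on both halves of a contrastive
    pair; run stage 2 (emergency response) only where a risk was flagged."""
    stage1 = "Does this image show an emergency requiring intervention? Answer yes or no."
    stage2 = "What immediate response is appropriate? Be concise."
    out = {}
    for label, img in (("emergency", emergency_img), ("safe", safe_img)):
        flagged = query_vlm(img, stage1).strip().lower().startswith("yes")
        out[f"{label}_flagged"] = flagged
        if flagged:
            out[f"{label}_response"] = query_vlm(img, stage2)
    # Correct behavior: flag the emergency image, not its safe counterpart.
    out["true_positive"] = out["emergency_flagged"]
    out["false_positive"] = out["safe_flagged"]  # the "overreaction" case
    return out
```

On this reading, the paper's 31–96% false-alarm range is the fraction of safe images for which `false_positive` is set.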

📝 Abstract
Vision-Language Models (VLMs) have demonstrated impressive capabilities in understanding visual content, but their reliability in safety-critical contexts remains under-explored. We introduce VERI (Visual Emergency Recognition Dataset), a carefully designed diagnostic benchmark of 200 images (100 contrastive pairs). Each emergency scene is matched with a visually similar but safe counterpart through multi-stage human verification and iterative refinement. Using a two-stage protocol - risk identification and emergency response - we evaluate 14 VLMs (2B-124B parameters) across medical emergencies, accidents, and natural disasters. Our analysis reveals a systematic overreaction problem: models excel at identifying real emergencies (70-100 percent success rate) but suffer from an alarming rate of false alarms, misidentifying 31-96 percent of safe situations as dangerous, with 10 scenarios failed by all models regardless of scale. This "better-safe-than-sorry" bias manifests primarily through contextual overinterpretation (88-93 percent of errors), challenging VLMs' reliability for safety applications. These findings highlight persistent limitations that are not resolved by increasing model scale, motivating targeted approaches for improving contextual safety assessment in visually misleading scenarios.
Problem

Research questions and friction points this paper is trying to address.

VLMs overreact by misidentifying safe scenes as emergencies
High false alarm rates in safety-critical visual recognition
Contextual overinterpretation causes unreliable emergency assessments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces VERI dataset for emergency recognition
Evaluates VLMs with a two-stage risk protocol (metric sketch after this list)
Identifies contextual overinterpretation as main error
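
A companion sketch, under the same assumptions as the earlier `evaluate_pair` example, aggregating per-pair results into the two headline numbers the paper reports. The field names are assumptions, not the authors' code:

```python
# Aggregate results from the hypothetical `evaluate_pair` harness
# over all 100 VERI contrastive pairs.

def summarize(pair_results: list[dict]) -> dict:
    n = len(pair_results)
    return {
        # Fraction of genuine emergencies correctly flagged (paper: 70-100 percent).
        "emergency_success_rate": sum(r["true_positive"] for r in pair_results) / n,
        # Fraction of safe scenes flagged anyway (paper: 31-96 percent).
        "false_alarm_rate": sum(r["false_positive"] for r in pair_results) / n,
    }
```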