🤖 AI Summary
Large Vision-Language Models (LVLMs) suffer from object misidentification and hallucination when confronted with incongruous scenes, where objects appear unexpectedly or expected objects are missing. To address this, we introduce ORIC, the first benchmark to systematically evaluate LVLM robustness to object-context relational inconsistencies. ORIC employs a dual-path sampling strategy, combining LLM-guided semantic perturbation with CLIP-based visual alignment, to generate image-text pairs that are semantically incongruous yet visually plausible. Evaluations across 18 LVLMs and 2 open-vocabulary detection models reveal substantial performance degradation under relational anomalies, with average accuracy dropping by 32.7%, exposing critical deficits in contextual reasoning and situational awareness. ORIC provides a reproducible, scalable benchmark and analytical framework for diagnosing and advancing context-aware recognition in LVLMs.
📝 Abstract
Large Vision-Language Models (LVLMs) have made significant strides in image captioning, visual question answering, and robotics by integrating visual and textual information. However, they remain prone to errors in incongruous contexts, where objects appear unexpectedly or are absent when contextually expected. This leads to two key recognition failures: object misidentification and hallucination. To systematically examine this issue, we introduce the Object Recognition in Incongruous Context Benchmark (ORIC), a novel benchmark that evaluates LVLMs in scenarios where object-context relationships deviate from expectations. ORIC employs two key strategies: (1) LLM-guided sampling, which identifies objects that are present but contextually incongruous, and (2) CLIP-guided sampling, which detects plausible yet nonexistent objects that are likely to be hallucinated, thereby creating an incongruous context. Evaluating 18 LVLMs and two open-vocabulary detection models, we find significant recognition gaps, underscoring the challenges posed by contextual incongruity. This work provides critical insights into LVLMs' limitations and encourages further research on context-aware object recognition.
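The CLIP-guided sampling step described above can be illustrated with a minimal sketch. The idea is to take candidate object labels scored by image-text similarity (e.g., CLIP cosine similarity) and keep the labels that score highly against the image yet do not actually appear in it: these are plausible-but-nonexistent objects suitable as hallucination probes. The function name, threshold value, and toy scores below are illustrative assumptions, not details from the paper.

```python
def select_hallucination_probes(clip_scores, present_labels, k=3, threshold=0.25):
    """Select absent-but-plausible object labels as hallucination probes.

    clip_scores: dict mapping candidate label -> image-text similarity score
                 (assumed precomputed, e.g., with a CLIP model).
    present_labels: set of labels for objects actually present in the image.
    k: maximum number of probes to return.
    threshold: minimum similarity for a label to count as "plausible"
               (hypothetical value, not from the paper).
    """
    # Keep labels that are NOT in the image but look plausible to CLIP.
    absent = {
        label: score
        for label, score in clip_scores.items()
        if label not in present_labels and score >= threshold
    }
    # Return the top-k most plausible absent labels, highest score first.
    return sorted(absent, key=absent.get, reverse=True)[:k]


# Toy example: "person" is actually in the image; "surfboard" and "boat"
# are absent but score high enough to serve as hallucination probes.
scores = {"surfboard": 0.31, "boat": 0.28, "person": 0.34, "laptop": 0.12}
probes = select_hallucination_probes(scores, present_labels={"person"})
print(probes)  # ['surfboard', 'boat']
```

In a full pipeline, the scores would come from encoding the image and each candidate label with a pretrained CLIP model; the selection logic itself is independent of how the similarities are produced.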