🤖 AI Summary
Large vision-language models (LVLMs) frequently exhibit object hallucination in multimodal vision-language tasks: they generate descriptions of objects that are not actually present in the image.
Method: We propose CAOS (Context-Aware Object Similarities), the first context-aware, cross-domain generalizable framework for evaluating object hallucination. CAOS jointly leverages image context, generated captions, and object-level statistics, integrating semantic relation modeling with object co-occurrence and frequency statistics, and employs sequential word-embedding analysis to uncover the mechanisms underlying hallucination.
Results: Experiments demonstrate that CAOS significantly improves out-of-distribution object hallucination detection. It also provides the first empirical evidence of a strong correlation between object generation order and hallucination severity. Moreover, CAOS offers fine-grained, interpretable hallucination attribution, pinpointing not only *whether* but *why* and *where* hallucinations occur, enabling principled diagnosis and mitigation of LVLM failures.
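To make the object-statistics component concrete, here is a minimal sketch of the two ingredients the summary mentions: a caption-level hallucination fraction (the share of generated objects absent from the ground truth) and top-k co-occurrence statistics over per-image annotations. All function names and the toy data are illustrative assumptions, not the paper's actual implementation.

```python
from collections import Counter

def hallucination_fraction(generated_objects, ground_truth_objects):
    """Fraction of generated objects absent from the ground truth
    (a CHAIR-style caption-level hallucination rate)."""
    gt = set(ground_truth_objects)
    hallucinated = [o for o in generated_objects if o not in gt]
    return len(hallucinated) / max(len(generated_objects), 1)

def top_k_cooccurring(annotations, target, k=3):
    """Rank the objects that most often co-occur with `target` across
    per-image ground-truth annotations (each a list of object labels).
    Frequently co-occurring objects are candidate hallucinations."""
    counts = Counter()
    for objects in annotations:
        objs = set(objects)
        if target in objs:
            counts.update(objs - {target})
    return [obj for obj, _ in counts.most_common(k)]

# Toy per-image annotations (illustrative only)
anns = [["dog", "frisbee", "person"], ["dog", "person"], ["cat", "sofa"]]
print(hallucination_fraction(["dog", "ball"], ["dog", "person"]))  # 0.5
print(top_k_cooccurring(anns, "dog", k=2))  # ['person', 'frisbee']
```

In evaluation pipelines of this kind, the co-occurrence ranking is typically used to pick probe objects for binary "is X in the image?" queries, while the fraction serves as the caption-level metric.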
📝 Abstract
Despite their impressive performance on multimodal tasks, large vision-language models (LVLMs) tend to suffer from hallucinations. An important type is object hallucination, where LVLMs generate objects that are inconsistent with the images shown to the model. Existing works typically attempt to quantify object hallucinations by detecting and measuring the fraction of hallucinated objects in generated captions. More recent work also measures object hallucinations by directly querying the LVLM with binary questions about the presence of likely hallucinated objects, selected using object statistics such as top-k frequent objects and top-k co-occurring objects. In this paper, we present Context-Aware Object Similarities (CAOS), a novel approach for evaluating object hallucination in LVLMs using object statistics as well as the generated captions. CAOS uniquely integrates object statistics with semantic relationships between objects in captions and ground-truth data. Moreover, existing approaches usually only detect and measure hallucinations belonging to a predetermined set of in-domain objects (typically the set of all ground-truth objects for the training dataset) and ignore generated objects that are not part of this set, leading to under-evaluation. To address this, we further employ language model-based object recognition to detect potentially out-of-domain hallucinated objects and use an ensemble of LVLMs to verify the presence of such objects in the query image. CAOS also examines the sequential dynamics of object generation, shedding light on how the order of object appearance influences hallucinations, and employs word embedding models to analyze the semantic reasons behind hallucinations. CAOS aims to offer a nuanced understanding of the hallucination tendencies of LVLMs by providing a systematic framework to identify and interpret object hallucinations.
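The abstract's word-embedding analysis of *why* a hallucination occurred can be sketched as scoring how semantically close a suspect object is to the objects actually in the image. The sketch below uses plain cosine similarity over toy 3-d vectors; a real pipeline would substitute pretrained word embeddings (e.g. GloVe or word2vec), and all names and vectors here are assumptions for illustration.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def max_context_similarity(obj, context_objects, embed):
    """Score how semantically close a (possibly hallucinated) object is
    to the objects actually present, via max cosine similarity.
    A high score suggests a semantically driven hallucination."""
    return max(cosine(embed[obj], embed[c]) for c in context_objects)

# Toy 3-d embeddings standing in for pretrained word vectors
emb = {
    "fork":  [0.9, 0.1, 0.0],
    "knife": [0.8, 0.2, 0.1],
    "table": [0.3, 0.7, 0.2],
    "plane": [0.0, 0.1, 0.9],
}
# A hallucinated "fork" near a real "knife" scores high; "plane" scores low.
print(max_context_similarity("fork", ["knife", "table"], emb))
print(max_context_similarity("plane", ["knife", "table"], emb))
```

Under this scoring, hallucinations with high context similarity point to semantic-association failures, while low-similarity ones suggest other causes, which is the kind of interpretable attribution the abstract describes.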