MetaToken: Detecting Hallucination in Image Descriptions by Meta Classification

📅 2024-05-29
🏛️ arXiv.org
📈 Citations: 2
✨ Influential: 0
🤖 AI Summary
Large Vision-Language Models (LVLMs) frequently generate hallucinated image descriptions in which the text is inconsistent with the visual input, undermining model reliability. Existing hallucination detection methods incorporate computationally costly large (vision) language models and operate only at the sentence or subsentence level. This paper proposes MetaToken, a lightweight binary classifier that detects hallucinations at the token level at negligible cost. Based on a statistical analysis of LVLM outputs, the authors identify key factors of hallucination that previous works overlooked and use them as input features for meta classification. MetaToken requires no knowledge of ground-truth data and can be applied to any open-source LVLM. Evaluated on four state-of-the-art LVLMs, it provides reliable, fine-grained hallucination detection and a practical step toward more trustworthy multimodal generation.

๐Ÿ“ Abstract
Large Vision Language Models (LVLMs) have shown remarkable capabilities in multimodal tasks like visual question answering or image captioning. However, inconsistencies between the visual information and the generated text, a phenomenon referred to as hallucinations, remain an unsolved problem with regard to the trustworthiness of LVLMs. To address this problem, recent works proposed to incorporate computationally costly Large (Vision) Language Models in order to detect hallucinations on a sentence- or subsentence-level. In this work, we introduce MetaToken, a lightweight binary classifier to detect hallucinations on the token-level at negligible cost. Based on a statistical analysis, we reveal key factors of hallucinations in LVLMs which have been overlooked in previous works. MetaToken can be applied to any open-source LVLM without any knowledge about ground truth data, providing a reliable detection of hallucinations. We evaluate our method on four state-of-the-art LVLMs, demonstrating the effectiveness of our approach.
Problem

Research questions and friction points this paper is trying to address.

LVLM-generated image descriptions contain hallucinations inconsistent with the visual input
Existing detectors rely on costly large models and work only at the sentence or subsentence level
Lightweight, token-level detection without ground truth data is missing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight binary classifier for token-level hallucination detection
Statistical analysis reveals key hallucination factors in LVLMs
Applicable to any open-source LVLM without ground truth data
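The idea behind the innovations above, meta classification over per-token statistics of an LVLM's output, can be sketched as follows. This is a minimal illustration, not the paper's method: the two features (token log-probability and predictive entropy) and the synthetic data are assumptions chosen for the sketch; the paper derives its own feature set from a statistical analysis.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_meta_classifier(features, labels, lr=0.1, epochs=300):
    """Fit a tiny logistic-regression meta classifier with per-sample
    gradient descent. Each feature vector summarizes one generated token."""
    n_feat = len(features[0])
    w = [0.0] * n_feat
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the logistic loss w.r.t. the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Probability that the token described by feature vector x is hallucinated."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Synthetic stand-in for LVLM token statistics: assume hallucinated tokens
# tend to have lower log-probability and higher entropy than grounded ones.
random.seed(0)
X, y = [], []
for _ in range(200):
    if random.random() < 0.5:  # grounded token
        X.append([random.gauss(-0.5, 0.3), random.gauss(1.0, 0.3)])
        y.append(0)
    else:                      # hallucinated token
        X.append([random.gauss(-3.0, 0.3), random.gauss(3.0, 0.3)])
        y.append(1)

w, b = train_meta_classifier(X, y)
acc = sum((predict(w, b, x) > 0.5) == bool(t) for x, t in zip(X, y)) / len(X)
```

Because the classifier only consumes statistics already produced during decoding, this style of detector adds no extra forward passes, which is what makes the token-level approach cheap compared to running a second large model as a judge.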