Detecting and Mitigating Hallucination in Large Vision Language Models via Fine-Grained AI Feedback

📅 2024-04-22
🏛️ arXiv.org
📈 Citations: 11
Influential: 2
🤖 AI Summary
To address hallucination in large vision-language models (LVLMs) during image captioning, where image-text misalignment causes generated text to drift from the visual content, this paper proposes a fine-grained, AI-feedback-driven detection and mitigation framework. Methodologically, it introduces the first sentence-level, multi-type hallucination detector identifying object-, attribute-, and relation-level inconsistencies; designs Hallucination-Severity-Aware Direct Preference Optimization (HSA-DPO) to close a detection–rewriting–preference-learning loop; and avoids costly human annotation by bootstrapping from a small proprietary-model-labeled dataset. The key contribution is a lightweight, transferable end-to-end solution that significantly improves LVLM reliability across multiple benchmarks: hallucination detection F1 increases by 12.6%, and the image-text alignment of generated captions improves by 23.4%. This work offers a practical paradigm for enhancing LVLM robustness and factual consistency.
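The detect-then-rewrite loop summarized above can be sketched as follows. This is a minimal illustration: the rule-based detector, the sentence-dropping rewriter, and the severity heuristic are stand-in assumptions, not the paper's trained models. The real system uses a learned sentence-level detector and an LLM rewriter; only the overall data flow (detect → rewrite → preference pair) follows the summary.

```python
# Sketch of the detect-then-rewrite pipeline: flag hallucinated sentences
# by type, rewrite the caption by dropping them, and emit a preference
# pair (rewritten = chosen, original = rejected) for HSA-DPO training.

HALLUCINATION_TYPES = ("object", "attribute", "relation")

def detect(sentence):
    # Toy stand-in for the trained sentence-level detector: a sentence is
    # "hallucinated" if it carries an explicit type marker like "[object]".
    return [t for t in HALLUCINATION_TYPES if f"[{t}]" in sentence]

def detect_then_rewrite(caption):
    kept, flagged = [], []
    for sent in caption.split(". "):
        types = detect(sent)
        (flagged if types else kept).append(sent)
    rewritten = ". ".join(kept)
    # Severity here is just the flagged-sentence fraction (an assumption);
    # the paper scores severity with a dedicated model.
    severity = len(flagged) / max(len(kept) + len(flagged), 1)
    return {"chosen": rewritten, "rejected": caption, "severity": severity}
```

Each resulting `{chosen, rejected, severity}` record is one preference-learning example for the mitigation stage.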

📝 Abstract
The rapidly developing Large Vision Language Models (LVLMs) have shown notable capabilities on a range of multi-modal tasks, but still face the hallucination phenomenon, where the generated texts do not align with the given contexts, significantly restricting their use. Most previous work detects and mitigates hallucination at the coarse-grained level or requires expensive annotation (e.g., labeling by proprietary models or human experts). To address these issues, we propose detecting and mitigating hallucinations in LVLMs via fine-grained AI feedback. The basic idea is to generate a small sentence-level hallucination annotation dataset with proprietary models, with which we train a hallucination detection model that performs sentence-level detection covering the primary hallucination types (i.e., object, attribute, and relationship). Then, we propose a detect-then-rewrite pipeline to automatically construct a preference dataset for training a hallucination-mitigating model. Furthermore, we propose differentiating the severity of hallucinations and introduce Hallucination Severity-Aware Direct Preference Optimization (HSA-DPO), which mitigates hallucination in LVLMs by incorporating hallucination severity into preference learning. Extensive experiments demonstrate the effectiveness of our method.
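The severity-aware preference learning can be illustrated with a single-pair loss sketch. Standard DPO compares the policy's log-probability margin between the preferred (rewritten) and dispreferred (hallucinated) caption against a frozen reference model; here that margin is additionally scaled by a per-pair severity score. The `(1 + severity)` weighting is an assumed scheme for illustration, not the paper's published formulation.

```python
import math

def hsa_dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, severity, beta=0.1):
    """Severity-weighted DPO loss for one preference pair (a sketch).

    logp_w / logp_l: policy log-probs of the chosen / rejected caption.
    ref_logp_w / ref_logp_l: frozen reference-model log-probs.
    severity: hallucination severity of the rejected caption in [0, 1].
    """
    # DPO margin: how much more the policy prefers the chosen caption,
    # relative to the reference model.
    margin = (logp_w - ref_logp_w) - (logp_l - ref_logp_l)
    # Assumed weighting: more severe hallucinations get a stronger push.
    scaled = beta * (1.0 + severity) * margin
    # -log(sigmoid(scaled)), written to avoid a separate sigmoid helper.
    return math.log(1.0 + math.exp(-scaled))
```

With a positive margin, raising `severity` shrinks the loss faster, so gradient updates penalize severely hallucinated rejections more than mild ones.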
Problem

Research questions and friction points this paper is trying to address.

Large Vision Language Models
Misalignment Errors
Reliability in Practical Applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

LVLMs Error Mitigation
Automatic Error Correction
Severity-based Optimization
Wenyi Xiao
Zhejiang University
Ziwei Huang
Zhejiang University
Leilei Gan
Zhejiang University
Wanggui He
Researcher, Alibaba Group
Haoyuan Li
Alibaba Group
Zhelun Yu
Alibaba Group
Hao Jiang
Alibaba Group
Fei Wu
Linchao Zhu
Zhejiang University