Mitigating Object Hallucination in MLLMs via Data-augmented Phrase-level Alignment

📅 2024-05-28
📈 Citations: 3
Influential: 1
🤖 AI Summary
Multimodal large language models (MLLMs) suffer from object hallucination—generating spurious descriptions of objects not present in the input image. To address this, the authors propose Data-augmented Phrase-level Alignment (DPA), a framework that constructs hallucinated–correct response pairs at the phrase level via generative data augmentation. DPA introduces a plug-and-play loss that can be applied to instruction-tuned off-the-shelf MLLMs, jointly suppressing hallucinated phrases while preserving general vision-language capabilities. On hallucination-oriented VQA benchmarks, DPA improves F1 by up to 13.4%; on image description tasks, it reduces the hallucination rate by up to 4.2%. The resulting model, HALVA, retains the base MLLM's performance on standard vision-language benchmarks without incurring additional inference overhead.

📝 Abstract
Despite their significant advancements, Multimodal Large Language Models (MLLMs) often generate factually inaccurate information, referred to as hallucination. In this work, we address object hallucinations in MLLMs, where information is generated about an object not present in the input image. We introduce Data-augmented Phrase-level Alignment (DPA), a novel loss which can be applied to instruction-tuned off-the-shelf MLLMs to mitigate hallucinations, while preserving their general vision-language capabilities. To fine-tune MLLMs with DPA, we first generate a set of "hallucinated" and "correct" response pairs through generative data augmentation by selectively altering the ground-truth information of the correct responses at a phrase level. The DPA loss is then used to train MLLMs to reduce the likelihood of hallucinated phrases compared to the correct ones. Our thorough evaluation on various benchmarks confirms the effectiveness of DPA in mitigating hallucination while retaining the out-of-the-box performance of the MLLMs on general tasks. For instance, MLLMs finetuned with DPA, which we refer to as Hallucination Attenuated Language and Vision Assistant (HALVA), improve F1 by up to 13.4% on hallucination visual question-answering and reduce the hallucination rate by up to 4.2% on image description tasks.
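The abstract describes training MLLMs to lower the likelihood of hallucinated phrases relative to their correct counterparts. The exact form of the DPA loss is not given here, so the following is a minimal, hypothetical sketch: a hinge-style phrase-level objective over per-token log-probabilities, with `margin`, `phrase_alignment_loss`, and the averaging scheme all being illustrative assumptions rather than the paper's actual formulation.

```python
def phrase_alignment_loss(correct_logprobs, hallucinated_logprobs, margin=1.0):
    """Hypothetical phrase-level contrastive objective (not the paper's exact loss).

    Takes per-token log-probabilities (under the MLLM) for a correct phrase
    and its hallucinated counterpart, and penalizes the model whenever the
    hallucinated phrase's average log-likelihood comes within `margin` of
    the correct phrase's.
    """
    # Length-normalize so short and long phrases are comparable.
    correct = sum(correct_logprobs) / len(correct_logprobs)
    hallucinated = sum(hallucinated_logprobs) / len(hallucinated_logprobs)
    # Zero loss once the correct phrase is sufficiently more likely.
    return max(0.0, margin - (correct - hallucinated))


# Correct phrase already far more likely: no penalty.
print(phrase_alignment_loss([-1.0, -1.0], [-4.0, -4.0]))  # 0.0
# Hallucinated phrase more likely: positive penalty to minimize.
print(phrase_alignment_loss([-3.0], [-1.0]))  # 3.0
```

In practice such a term would be added to the standard instruction-tuning loss, which is one way to read the abstract's claim that general capabilities are preserved while hallucinated phrases are suppressed.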
Problem

Research questions and friction points this paper is trying to address.

Object hallucination: MLLMs generate information about objects not present in the input image
Mitigating hallucination without degrading general vision-language capabilities
Adapting off-the-shelf instruction-tuned MLLMs without additional inference overhead
Innovation

Methods, ideas, or system contributions that make the work stand out.

Data-augmented Phrase-level Alignment (DPA) loss, applicable to off-the-shelf instruction-tuned MLLMs
Generative data augmentation that builds hallucinated–correct response pairs by altering ground-truth phrases
Fine-tuning MLLMs to reduce the likelihood of hallucinated phrases relative to correct ones