Mitigating Multimodal Hallucinations via Gradient-based Self-Reflection

πŸ“… 2025-09-03
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Hallucinations in multimodal large language models (MLLMs) primarily stem from text–vision bias (overreliance on textual cues) and co-occurrence bias (statistical correlations among objects in training data). Existing mitigation methods are largely heuristic and fail to model instance-level variations in bias strength. Method: We propose the first gradient-based introspective framework that requires no additional training or external resources. It quantifies the influence of textual versus visual cues via token-level gradient analysis, precisely identifies vision-related bias tokens, and dynamically suppresses them. Furthermore, we design an influence-aware contrastive decoding mechanism to jointly mitigate both biases. Contribution/Results: Our method reduces hallucination rates significantly on LLaVA-QA90, achieving up to a 92% improvement in accuracy. It demonstrates strong cross-model generalizability without architectural modifications or fine-tuning.

Technology Category

Application Category

πŸ“ Abstract
Hallucinations in multimodal large language model are caused by the text-visual bias and the co-occurrence bias. The former reflects an over-reliance on text information in the decision-making process, while the latter arises from the statistical object-pairing patterns abstracted from the training data. Existing mitigation methods heuristically address these biases without understanding the fluctuating bias level across the instances. We first propose estimating the influence of respective token types (visual, prompt, and previous outputs) using a gradient-based self-reflection method. The estimated token influence further enables the detection of object-related visual tokens and their integration into an influence-aware contrastive decoding framework to mitigate both types of biases simultaneously. Our method operates without the need for additional resources, such as costly fine-tuning, extra models, or data statistics. Extensive experiments show it effectively reduces hallucinations, achieving up to a 92% accuracy increase on LLaVA-QA90.
Problem

Research questions and friction points this paper is trying to address.

Mitigating multimodal hallucinations in large language models
Addressing text-visual and co-occurrence bias issues
Detecting object-related visual tokens without extra resources
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gradient-based self-reflection estimates token influence
Influence-aware contrastive decoding mitigates biases
No additional resources or fine-tuning required
πŸ”Ž Similar Papers
No similar papers found.