π€ AI Summary
Hallucinations in multimodal large language models (MLLMs) primarily stem from textβvision bias (overreliance on textual cues) and co-occurrence bias (statistical correlations among objects in training data). Existing mitigation methods are largely heuristic and fail to model instance-level variations in bias strength.
Method: We propose the first gradient-based introspective framework that requires no additional training or external resources. It quantifies the influence of textual versus visual cues via token-level gradient analysis, precisely identifies vision-related bias tokens, and dynamically suppresses them. Furthermore, we design an influence-aware contrastive decoding mechanism to jointly mitigate both biases.
Contribution/Results: Our method reduces hallucination rates significantly on LLaVA-QA90, achieving up to a 92% improvement in accuracy. It demonstrates strong cross-model generalizability without architectural modifications or fine-tuning.
π Abstract
Hallucinations in multimodal large language model are caused by the text-visual bias and the co-occurrence bias. The former reflects an over-reliance on text information in the decision-making process, while the latter arises from the statistical object-pairing patterns abstracted from the training data. Existing mitigation methods heuristically address these biases without understanding the fluctuating bias level across the instances. We first propose estimating the influence of respective token types (visual, prompt, and previous outputs) using a gradient-based self-reflection method. The estimated token influence further enables the detection of object-related visual tokens and their integration into an influence-aware contrastive decoding framework to mitigate both types of biases simultaneously. Our method operates without the need for additional resources, such as costly fine-tuning, extra models, or data statistics. Extensive experiments show it effectively reduces hallucinations, achieving up to a 92% accuracy increase on LLaVA-QA90.