🤖 AI Summary
Large Vision-Language Models (LVLMs) suffer from pervasive hallucination, stemming from dynamic, multi-path interactions—including image-to-input-text, image-to-output-text, and text-to-text pathways—whose relative dominance shifts with the question-answer alignment format (e.g., discriminative vs. generative). This work is the first to uncover the format-dependent causal-pathway mechanism underlying LVLM hallucination. We propose a unified, pathway-customized intervention framework grounded in Transformer causal structural analysis: it decomposes computation paths and identifies critical hallucination-prone attention heads, enabling format-aware, coordinated multi-path intervention. Evaluated across multiple benchmarks, our method significantly reduces hallucination rates across diverse alignment formats, demonstrating strong effectiveness, cross-model and cross-task generalizability, and interpretability via explicit pathway attribution.
📝 Abstract
Despite their impressive performance across a wide range of tasks, Large Vision-Language Models (LVLMs) remain prone to hallucination. In this study, we propose a comprehensive intervention framework aligned with the Transformer's causal architecture in LVLMs, integrating the effects of different intervention paths on hallucination. We find that hallucinations in LVLMs do not arise from a single causal path, but rather from the interplay among image-to-input-text, image-to-output-text, and text-to-text pathways. For the first time, we also find that LVLMs rely on different pathways depending on the question-answer alignment format. Building on these insights, we propose simple yet effective methods to identify and intervene on critical hallucination heads within each pathway, tailored to discriminative and generative formats. Experiments across multiple benchmarks demonstrate that our approach consistently reduces hallucinations across diverse alignment types.
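The abstract describes identifying critical hallucination heads within a pathway and intervening on them. A minimal sketch of that general idea is below; the scoring function, selection size `k`, and scaling factor `alpha` are all illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def select_critical_heads(scores, k):
    """Return indices of the k heads with the highest attribution scores.

    `scores[h]` is assumed to be a per-head hallucination-attribution score
    for one pathway (e.g., image-to-output-text); how such scores are
    computed is method-specific and not shown here.
    """
    return np.argsort(scores)[::-1][:k]

def intervene(head_outputs, critical, alpha=0.5):
    """Scale the output vectors of the selected heads by alpha.

    alpha=0.0 corresponds to fully ablating those heads; intermediate
    values soften their contribution instead.
    """
    out = head_outputs.copy()
    out[critical] *= alpha
    return out

# Toy example: 8 attention heads, 4-dimensional per-head outputs.
rng = np.random.default_rng(0)
scores = rng.random(8)
heads = rng.random((8, 4))
critical = select_critical_heads(scores, k=2)
edited = intervene(heads, critical, alpha=0.0)  # ablate the top-2 heads
```

In a real model the intervention would be applied inside the attention module at inference time, with head selection done separately per pathway and per alignment format, as the abstract suggests.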