Intervene-All-Paths: Unified Mitigation of LVLM Hallucinations across Alignment Formats

📅 2025-11-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large Vision-Language Models (LVLMs) suffer from pervasive hallucination, stemming from dynamic, multi-path interactions—including image-to-input-text, image-to-output-text, and text-to-text pathways—whose relative dominance shifts with the question-answering alignment format (e.g., discriminative vs. generative). This work is the first to uncover the format-dependent causal pathway mechanism underlying LVLM hallucination. The authors propose a unified, pathway-customized intervention framework grounded in causal structural analysis of the Transformer: it decomposes computation paths and identifies critical hallucination-prone attention heads, enabling format-aware, coordinated multi-path intervention. Evaluated across multiple benchmarks, the method significantly reduces hallucination rates across diverse alignment formats, demonstrating strong effectiveness, cross-model and cross-task generalizability, and interpretability via explicit pathway attribution.


📝 Abstract
Despite their impressive performance across a wide range of tasks, Large Vision-Language Models (LVLMs) remain prone to hallucination. In this study, we propose a comprehensive intervention framework aligned with the transformer's causal architecture in LVLMs, integrating the effects of different intervention paths on hallucination. We find that hallucinations in LVLMs do not arise from a single causal path, but rather from the interplay among image-to-input-text, image-to-output-text, and text-to-text pathways. For the first time, we also find that LVLMs rely on different pathways depending on the question-answer alignment format. Building on these insights, we propose simple yet effective methods to identify and intervene on critical hallucination heads within each pathway, tailored to discriminative and generative formats. Experiments across multiple benchmarks demonstrate that our approach consistently reduces hallucinations across diverse alignment types.
Problem

Research questions and friction points this paper is trying to address.

Mitigating hallucinations in Large Vision-Language Models across different alignment formats
Addressing interplay between image-text and text-text pathways causing LVLM hallucinations
Identifying and intervening on critical hallucination heads in transformer architecture
Innovation

Methods, ideas, or system contributions that make the work stand out.

Intervenes on multiple causal pathways in LVLMs
Targets critical hallucination heads in transformer architecture
Tailors mitigation to discriminative and generative formats
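
The head-level intervention described above can be illustrated with a minimal sketch. The assumption here (the paper's exact mechanism may differ) is that hallucination-prone heads, once identified per pathway and per alignment format, are mitigated by down-scaling their contribution to the residual stream. All names and the scaling scheme are illustrative:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_head_outputs(q, k, v):
    """Per-head scaled dot-product attention.

    q, k, v: arrays of shape (num_heads, seq_len, head_dim).
    Returns per-head outputs of the same shape, before they are
    concatenated/summed into the residual stream.
    """
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

def intervene_heads(head_outputs, hallucination_heads, alpha=0.1):
    """Down-scale the outputs of flagged hallucination-prone heads.

    hallucination_heads: head indices identified for the active
    pathway and alignment format (discriminative vs. generative).
    alpha: scaling factor; 0.0 ablates the head entirely.
    """
    out = head_outputs.copy()
    for h in hallucination_heads:
        out[h] *= alpha
    return out

# Toy example: 4 heads, 5 tokens, 8-dim heads; ablate head 0, damp head 2.
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((4, 5, 8)) for _ in range(3))
heads = attention_head_outputs(q, k, v)
mitigated = intervene_heads(heads, hallucination_heads=[0, 2], alpha=0.0)
```

In practice this kind of edit is applied inside the forward pass (e.g., via hooks on each attention layer), with a different head set per pathway; the sketch only shows the core scaling operation.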