D-LEAF: Localizing and Correcting Hallucinations in Multimodal LLMs via Layer-to-head Attention Diagnostics

📅 2025-09-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal large language models (MLLMs) frequently generate hallucinations characterized by visual–textual inconsistency, and existing methods struggle to precisely localize cross-layer attention anomalies responsible for such hallucinations. To address this, we propose the Dynamic Layer-wise Entropy and Attention Fusion (D-LEAF) framework, enabling task-agnostic hallucination localization and correction at inference time with minimal computational overhead. D-LEAF introduces two novel diagnostic metrics—Layer-wise Image Attention Entropy (LIAE) and Image Attention Focus (IAF)—which jointly and adaptively identify the erroneous transformer layer and critical attention head(s) underlying hallucination. Evaluated on image captioning benchmarks, D-LEAF achieves a 53% relative improvement in hallucination reduction; on VQA tasks, it boosts both accuracy and F1-score by approximately 4%. The method effectively suppresses hallucinations while preserving inference efficiency and model generality.

📝 Abstract
Multimodal Large Language Models (MLLMs) achieve strong performance on tasks like image captioning and visual question answering, but remain prone to hallucinations, where generated text conflicts with the visual input. Prior work links this partly to insufficient visual attention, but existing attention-based detectors and mitigation methods typically apply uniform adjustments across layers and heads, obscuring where errors originate. In this paper, we first show that these methods fail to accurately localize problematic layers. We then introduce two diagnostics: Layer Image Attention Entropy (LIAE), which flags anomalous layers, and Image Attention Focus (IAF), which scores attention heads within those layers. Analysis shows that LIAE pinpoints faulty layers and that IAF reliably ranks the heads that warrant correction. Guided by these signals, we propose Dynamic Layer-wise Entropy and Attention Fusion (D-LEAF), a task-agnostic, attention-guided method that dynamically localizes and corrects errors during inference with negligible overhead. Results show that D-LEAF delivers a 53% relative improvement on standard captioning benchmarks, and on VQA both accuracy and F1-score improve by approximately 4%, substantially suppressing hallucinations while preserving efficiency.
Problem

Research questions and friction points this paper is trying to address.

Localizing hallucinations in multimodal LLMs
Correcting attention errors in specific layers
Improving accuracy while maintaining efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses entropy and focus diagnostics to pinpoint errors
Dynamically localizes and corrects hallucinations during inference
Applies layer-wise attention fusion with minimal overhead
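The summary describes the two diagnostics only at a high level: an entropy score over image-token attention to flag anomalous layers (LIAE), and a per-head focus score within flagged layers (IAF). A minimal sketch of how such scores could be computed from a model's attention maps might look as follows; the exact formulas, normalization, and thresholds are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def liae(attn, image_token_idx):
    """Sketch of a Layer-wise Image Attention Entropy (LIAE) score.

    attn: array of shape (num_layers, num_heads, seq_len) holding attention
          weights from the current generation step.
    image_token_idx: indices of the image tokens within the sequence.

    Returns one entropy value per layer; layers whose entropy deviates
    strongly from the rest would be flagged as anomalous (the exact
    thresholding rule is an assumption here).
    """
    # Attention mass on image tokens, averaged over heads: (layers, n_img)
    img_attn = attn[:, :, image_token_idx].mean(axis=1)
    # Renormalize over image tokens so each layer forms a distribution
    p = img_attn / img_attn.sum(axis=-1, keepdims=True)
    # Shannon entropy per layer (epsilon guards against log(0))
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

def iaf(attn, layer, image_token_idx):
    """Sketch of an Image Attention Focus (IAF) score: for each head in the
    flagged layer, the fraction of its attention mass placed on image
    tokens. Low-focus heads would be the candidates for correction."""
    head_attn = attn[layer]  # (num_heads, seq_len)
    return head_attn[:, image_token_idx].sum(axis=-1) / head_attn.sum(axis=-1)
```

In this reading, LIAE acts at layer granularity and IAF refines the diagnosis to individual heads, which matches the summary's description of jointly and adaptively identifying the erroneous layer and critical head(s) at inference time.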
Tiancheng Yang
MBZUAI, School of Advanced Interdisciplinary Sciences, University of Chinese Academy of Sciences
Lin Zhang
Provable Responsible AI and Data Analytics (PRADA) Lab, King Abdullah University of Science and Technology
Jiaye Lin
Tsinghua University
Guimin Hu
University of Copenhagen
Multimodal Learning, Natural Language Processing, Affective Computing, Haptic Understanding
Di Wang
Provable Responsible AI and Data Analytics (PRADA) Lab, King Abdullah University of Science and Technology
Lijie Hu
Assistant Professor, MBZUAI
Explainable AI, LLM, Differential Privacy