HalluShift++: Bridging Language and Vision through Internal Representation Shifts for Hierarchical Hallucinations in MLLMs

📅 2025-12-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal large language models (MLLMs) frequently generate hallucinations that are inconsistent with the input image content, yet existing evaluation methods rely on error-prone external LLM-based evaluators and suffer from limited generalizability and interpretability. Method: We propose the first hierarchical hallucination detection framework grounded in internal model representation dynamics: it quantifies inter-layer visual-language representation shifts to model distributional anomalies and integrates hypothesis testing to localize hallucination sources, without requiring additional annotations. Contribution/Results: Our approach eliminates dependence on external evaluators, enabling cross-modal, fine-grained, and interpretable hallucination identification. Extensive experiments across multiple state-of-the-art MLLMs show significantly higher detection accuracy than prevailing external-evaluator-based approaches, along with strong generalization across architectures and tasks.
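The summary's core measurement, scoring inter-layer representation shifts, can be sketched in a few lines. The snippet below is an illustrative sketch, not the paper's released implementation: it assumes access to per-layer hidden states (e.g., via `output_hidden_states=True` in Hugging Face transformers) and scores each layer transition by the cosine distance between mean-pooled token representations; the function name and the synthetic inputs are hypothetical.

```python
import numpy as np

def layer_shift_scores(hidden_states):
    """Score each layer transition by the cosine distance between
    mean-pooled token representations of consecutive layers.

    hidden_states: list of (num_tokens, hidden_dim) arrays, one per layer.
    Returns one score per transition: 0 = no shift, up to 2 = reversal.
    """
    means = [h.mean(axis=0) for h in hidden_states]
    scores = []
    for a, b in zip(means[:-1], means[1:]):
        cos = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
        scores.append(1.0 - cos)
    return np.array(scores)

# Synthetic stand-in for a 6-layer model's hidden states (12 tokens, dim 64).
rng = np.random.default_rng(0)
layers = [rng.normal(size=(12, 64)) for _ in range(6)]
shifts = layer_shift_scores(layers)
print(shifts.shape)  # → (5,): one score per layer transition
```

A real detector would compute such scores per generated token and feed them to a downstream classifier or test; mean pooling and cosine distance are just one plausible choice of shift statistic.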

📝 Abstract
Multimodal Large Language Models (MLLMs) have demonstrated remarkable capabilities in vision-language understanding tasks. While these models produce linguistically coherent output, they often suffer from hallucinations, generating descriptions that are factually inconsistent with the visual content, potentially leading to adverse consequences. The assessment of hallucinations in MLLMs has therefore become increasingly crucial in the model development process. Contemporary methodologies predominantly depend on external LLM evaluators, which are themselves susceptible to hallucinations and may present challenges in terms of domain adaptation. In this study, we propose the hypothesis that hallucination manifests as measurable irregularities within the internal layer dynamics of MLLMs, observable not merely as distributional shifts but also through layer-wise hypothesis testing of specific assumptions. By incorporating these modifications, HalluShift++ broadens the efficacy of hallucination detection from text-based large language models (LLMs) to encompass multimodal scenarios. Our codebase is available at https://github.com/C0mRD/HalluShift_Plus.
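The abstract frames hallucination as layer-wise irregularities that can be probed statistically. As a hedged illustration, not the paper's actual test, one could flag layer transitions whose shift score deviates from a reference distribution collected over non-hallucinated generations; the z-score threshold, function name, and all numeric values below are assumptions for demonstration.

```python
import numpy as np

def flag_anomalous_layers(shifts, reference, alpha=2.0):
    """Flag layer transitions whose shift score deviates from the
    reference statistics by more than alpha standard deviations
    (a crude z-test standing in for a proper hypothesis test)."""
    mu, sigma = reference.mean(), reference.std() + 1e-8
    z = (shifts - mu) / sigma
    return np.where(np.abs(z) > alpha)[0]

# Reference shifts from (hypothetical) non-hallucinated generations,
# and a new generation whose second layer transition spikes.
reference = np.array([0.10, 0.12, 0.11, 0.09, 0.10])
shifts = np.array([0.10, 0.50, 0.11])
print(flag_anomalous_layers(shifts, reference))  # → [1]
```

Localizing the anomalous transition index is what makes such a signal interpretable: it points at where in the network the visual and language representations diverge, rather than only labeling the output as hallucinated.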
Problem

Research questions and friction points this paper is trying to address.

How can hallucinations in MLLMs be detected without error-prone external LLM evaluators?
Do hallucinations leave measurable traces in a model's internal layer dynamics?
Can text-only hallucination detection be extended to multimodal scenarios?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Detects hallucinations from inter-layer representation shifts, with no external evaluator
Localizes hallucination sources via layer-wise hypothesis testing, without extra annotations
Extends shift-based detection from text-only LLMs to multimodal models