Causal Tracing of Object Representations in Large Vision Language Models: Mechanistic Interpretability and Hallucination Mitigation

📅 2025-11-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current mechanistic interpretability research on large vision-language models (LVLMs) lacks systematic causal analysis across visual/textual tokens, individual layer components, and the full-stack architecture—hindering trustworthy reasoning and hallucination mitigation. To address this, we propose the Fine-grained Cross-modal Causal Tracing (FCCT) framework, the first to identify the critical role of multi-head self-attention (MHSA) at the final token position in intermediate layers for cross-modal aggregation, and to uncover a three-stage hierarchical pattern in feed-forward networks (FFNs) for object representation storage and propagation. Leveraging these insights, we design a training-free Intermediate Representation Injection (IRI) method. Evaluated across five mainstream benchmarks and multiple LVLMs, IRI significantly reduces hallucinations and improves perceptual accuracy, achieving state-of-the-art performance without compromising inference speed or core capabilities.

📝 Abstract
Despite the remarkable advancements of Large Vision-Language Models (LVLMs), their mechanistic interpretability remains underexplored. Existing analyses are insufficiently comprehensive: they do not jointly cover visual and textual tokens, individual model components, and the full range of layers. This limitation restricts actionable insights for improving the faithfulness of model outputs and for downstream tasks such as hallucination mitigation. To address it, we introduce the Fine-grained Cross-modal Causal Tracing (FCCT) framework, which systematically quantifies causal effects on visual object perception. FCCT conducts fine-grained analysis covering the full range of visual and textual tokens and three core model components, namely multi-head self-attention (MHSA), feed-forward networks (FFNs), and hidden states, across all decoder layers. Our analysis is the first to demonstrate that MHSAs at the last token position in middle layers play a critical role in aggregating cross-modal information, while FFNs exhibit a three-stage hierarchical progression in the storage and transfer of visual object representations. Building on these insights, we propose Intermediate Representation Injection (IRI), a training-free inference-time technique that reinforces the flow of visual object information by precisely intervening on cross-modal representations at specific components and layers, thereby enhancing perception and mitigating hallucination. Consistent improvements across five widely used benchmarks and multiple LVLMs show that IRI achieves state-of-the-art performance while preserving inference speed and other foundational capabilities.
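As a rough, self-contained illustration of the causal-tracing idea behind FCCT (not the paper's implementation; the toy residual stack, dimensions, and noise scale below are invented for this sketch), one can corrupt an input, then restore the clean hidden state at a single layer of the corrupted run and measure how far the output moves back toward the clean output:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(h, W):
    # Toy decoder layer: residual connection plus a nonlinearity.
    return h + np.tanh(W @ h)

def forward(h0, weights, patch_at=None, patch_vec=None, record=False):
    """Run the layer stack. If patch_at is set, overwrite the hidden
    state after that layer with patch_vec (the causal-tracing patch).
    With record=True, also return the hidden state after every layer."""
    h = h0.copy()
    states = []
    for i, W in enumerate(weights):
        h = layer(h, W)
        if patch_at == i and patch_vec is not None:
            h = patch_vec.copy()
        if record:
            states.append(h.copy())
    return (h, states) if record else h

d, L = 8, 6
weights = [rng.normal(scale=0.3, size=(d, d)) for _ in range(L)]
clean_in = rng.normal(size=d)
noise_in = clean_in + rng.normal(scale=2.0, size=d)  # "corrupted" input

clean_out, clean_states = forward(clean_in, weights, record=True)
corrupt_out = forward(noise_in, weights)

def effect(l):
    # Indirect effect of layer l: restore the clean hidden state there
    # in the corrupted run, and measure distance to the clean output.
    patched = forward(noise_in, weights, patch_at=l, patch_vec=clean_states[l])
    return np.linalg.norm(patched - clean_out)
```

In the paper's setting the patch targets specific token positions and components (MHSA outputs, FFN outputs, hidden states) rather than the entire state, so recovery is partial and varies by layer; in this toy, restoring the full hidden state at any layer recovers the clean run exactly, since the remaining layers are deterministic.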
Problem

Research questions and friction points this paper is trying to address.

Causal mechanisms underlying object perception in large vision-language models remain underexplored
Existing analyses lack comprehensive coverage of visual/textual tokens, model components, and layers
These gaps limit improvements to output faithfulness and to downstream tasks such as hallucination mitigation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-grained Cross-modal Causal Tracing (FCCT), a framework that quantifies causal effects on visual object perception
Intermediate Representation Injection (IRI), a training-free inference-time technique
Precise intervention on cross-modal representations at specific components and layers
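The injection idea above can be sketched in a toy form: cache a representation from a forward pass and blend it back into the hidden state at a chosen middle layer at inference time, with no training. Everything below (the toy layers, the blending coefficient `alpha`, the choice of layer 2) is an illustrative assumption, not the paper's actual procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

def layer(h, W):
    # Toy decoder layer: residual connection plus a nonlinearity.
    return h + np.tanh(W @ h)

def generate(h0, weights, inject_layer=None, inject_vec=None, alpha=0.3):
    """Forward pass with an optional IRI-style intervention: at
    inject_layer, mix a cached representation into the hidden state."""
    h = h0.copy()
    for i, W in enumerate(weights):
        h = layer(h, W)
        if i == inject_layer and inject_vec is not None:
            h = (1.0 - alpha) * h + alpha * inject_vec
    return h

d, L = 8, 6
weights = [rng.normal(scale=0.3, size=(d, d)) for _ in range(L)]
visual_rep = rng.normal(size=d)    # stand-in for a cached visual object representation
prompt_state = rng.normal(size=d)  # stand-in for the last-token hidden state

base = generate(prompt_state, weights)
boosted = generate(prompt_state, weights, inject_layer=2, inject_vec=visual_rep)
# alpha = 0 degrades gracefully to the unmodified forward pass
noop = generate(prompt_state, weights, inject_layer=2, inject_vec=visual_rep, alpha=0.0)
```

The blend at a single middle layer mirrors the paper's finding that cross-modal aggregation concentrates in middle-layer components at the last token position; since the intervention is a simple interpolation inside an ordinary forward pass, it adds no training and negligible inference cost.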