Multimodal Fact-Level Attribution for Verifiable Reasoning

📅 2026-02-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing approaches struggle to perform fine-grained factual attribution for generated content in complex multimodal reasoning, and in particular lack the ability to evaluate attributions across heterogeneous modalities such as video and audio. To address this gap, this work proposes MuRGAt, the first multimodal, fact-level attribution benchmark tailored to complex reasoning scenarios, which requires models to cite precise modality-specific temporal segments for each factual claim during multi-step reasoning. The authors also introduce an automatic evaluation framework that aligns closely with human judgments. Experiments reveal pervasive "attribution hallucination" in state-of-the-art multimodal large language models: even when the reasoning is correct, the cited sources are often inaccurate. Moreover, increasing reasoning depth or enforcing structured attribution formats further degrades attribution accuracy, exposing a critical bottleneck in verifiability.
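To make the required output format concrete, here is a minimal sketch of what such a grounded answer could look like. The schema, field names, and values are illustrative assumptions made for this summary, not the benchmark's actual format:

```python
# Hypothetical example of a grounded answer in the style MuRGAt asks for:
# every factual claim in the reasoning chain carries its own citation
# naming a modality and the temporal segment that supports it.
# (Field names and values are illustrative, not the benchmark's schema.)
grounded_answer = {
    "answer": "The speaker changes topic right after the applause.",
    "reasoning": [
        {
            "claim": "Applause is audible around the midpoint.",
            "citations": [{"modality": "audio", "start": 62.0, "end": 66.5}],
        },
        {
            "claim": "The slide on screen changes immediately afterward.",
            "citations": [{"modality": "video", "start": 66.0, "end": 70.0}],
        },
    ],
}
```

The key property is that each intermediate claim, not just the final answer, is tied to a modality-plus-segment citation that can be checked against the input.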

📝 Abstract
Multimodal large language models (MLLMs) are increasingly used for real-world tasks involving multi-step reasoning and long-form generation, where reliability requires grounding model outputs in heterogeneous input sources and verifying individual factual claims. However, existing multimodal grounding benchmarks and evaluation methods focus on simplified, observation-based scenarios or limited modalities and fail to assess attribution in complex multimodal reasoning. We introduce MuRGAt (Multimodal Reasoning with Grounded Attribution), a benchmark for evaluating fact-level multimodal attribution in settings that require reasoning beyond direct observation. Given inputs spanning video, audio, and other modalities, MuRGAt requires models to generate answers with explicit reasoning and precise citations, where each citation specifies both modality and temporal segments. To enable reliable assessment, we introduce an automatic evaluation framework that strongly correlates with human judgments. Benchmarking with human and automated scores reveals that even strong MLLMs frequently hallucinate citations despite correct reasoning. Moreover, we observe a key trade-off: increasing reasoning depth or enforcing structured grounding often degrades accuracy, highlighting a significant gap between internal reasoning and verifiable attribution.
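The abstract does not specify how the automatic evaluation scores citations, so the following is a minimal sketch of one plausible check, assuming predicted citations are matched to gold segments by temporal intersection-over-union (IoU) within the same modality. The `Citation` schema, the IoU matching rule, and the 0.5 threshold are illustrative assumptions, not the paper's evaluation framework:

```python
from dataclasses import dataclass

@dataclass
class Citation:
    """One fact-level citation: the modality cited and a temporal segment within it."""
    modality: str  # e.g. "video" or "audio"
    start: float   # segment start, in seconds
    end: float     # segment end, in seconds

def temporal_iou(pred: Citation, gold: Citation) -> float:
    """Intersection-over-union of two temporal segments; zero if modalities differ."""
    if pred.modality != gold.modality:
        return 0.0
    inter = max(0.0, min(pred.end, gold.end) - max(pred.start, gold.start))
    union = max(pred.end, gold.end) - min(pred.start, gold.start)
    return inter / union if union > 0 else 0.0

def attribution_precision(preds: list[Citation], golds: list[Citation],
                          iou_threshold: float = 0.5) -> float:
    """Fraction of predicted citations that overlap some gold segment.

    A low value despite a correct final answer is the "attribution
    hallucination" pattern the benchmark reports.
    """
    if not preds:
        return 0.0
    hits = sum(any(temporal_iou(p, g) >= iou_threshold for g in golds)
               for p in preds)
    return hits / len(preds)

# Toy usage: one citation lands on a gold segment, the other is hallucinated.
preds = [Citation("video", 12.0, 18.0), Citation("audio", 40.0, 45.0)]
golds = [Citation("video", 11.5, 19.0)]
print(attribution_precision(preds, golds))  # 0.5
```

Under this kind of metric, a model can score perfectly on answer correctness while scoring poorly on attribution, which is exactly the reasoning-versus-verifiability gap the paper highlights.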
Problem

Research questions and friction points this paper is trying to address.

multimodal reasoning
fact-level attribution
verifiable reasoning
grounding
hallucination
Innovation

Methods, ideas, or system contributions that make the work stand out.

multimodal attribution
fact-level grounding
verifiable reasoning
MuRGAt benchmark
citation hallucination