MAVIS: A Benchmark for Multimodal Source Attribution in Long-form Visual Question Answering

📅 2025-11-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing work primarily addresses textual source attribution, neglecting multimodal settings. This paper tackles multimodal source attribution in long-form visual question answering (LVQA), proposing MAVIS—the first dedicated benchmark for evaluating intent understanding, cross-modal evidence retrieval, and reference generation. Its contributions are threefold: (1) a large-scale, human-annotated dataset; (2) fine-grained automated evaluation metrics that uncover contextual bias in image documents for the first time; and (3) a multimodal large language model–based framework integrating retrieval-augmented generation (RAG) for joint image-text retrieval and reference generation, featuring fact-level reference annotation. Experiments demonstrate substantial improvements in answer informativeness and credibility. However, image grounding remains weaker than text grounding, and prompt engineering reveals an inherent trade-off between informativeness and groundedness.
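The retrieve-then-cite loop described above can be sketched minimally. Everything here is illustrative rather than the paper's actual pipeline: `Document`, `retrieve`, and `answer_with_citations` are assumed names, retrieval is reduced to word overlap, and the LVLM generation step is stubbed out.

```python
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    modality: str  # "text" or "image"
    content: str   # raw text, or a caption/OCR string standing in for an image

def retrieve(question: str, corpus: list[Document], k: int = 2) -> list[Document]:
    """Toy lexical retriever: rank documents by word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: len(q_words & set(d.content.lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer_with_citations(question: str, corpus: list[Document]) -> list[tuple[str, list[str]]]:
    """Pair each generated fact with the IDs of the documents it cites.

    A real system would call an LVLM over the retrieved multimodal evidence;
    here we simply emit one placeholder fact per retrieved document.
    """
    evidence = retrieve(question, corpus)
    return [(f"Fact derived from {d.doc_id}.", [d.doc_id]) for d in evidence]
```

The point of the sketch is the output shape: every statement carries an explicit list of cited document IDs, which is what fact-level attribution metrics score against.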

📝 Abstract
Source attribution aims to enhance the reliability of AI-generated answers by including references for each statement, helping users validate the provided answers. However, existing work has primarily focused on text-only scenarios and largely overlooked the role of multimodality. We introduce MAVIS, the first benchmark designed to evaluate multimodal source attribution systems that understand user intent behind visual questions, retrieve multimodal evidence, and generate long-form answers with citations. Our dataset comprises 157K visual QA instances, where each answer is annotated with fact-level citations referring to multimodal documents. We develop fine-grained automatic metrics along three dimensions of informativeness, groundedness, and fluency, and demonstrate their strong correlation with human judgments. Our key findings are threefold: (1) LVLMs with multimodal RAG generate more informative and fluent answers than unimodal RAG, but they exhibit weaker groundedness for image documents than for text documents, a gap amplified in multimodal settings. (2) Given the same multimodal documents, there is a trade-off between informativeness and groundedness across different prompting methods. (3) Our proposed method highlights mitigating contextual bias in interpreting image documents as a crucial direction for future research. The dataset and experimental code are available at https://github.com/seokwon99/MAVIS
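The groundedness dimension mentioned in the abstract can be illustrated with a toy fact-level metric. This is a sketch under stated assumptions, not the paper's actual evaluator: the substring check stands in for the entailment model a real metric would use, and `groundedness` is an assumed name.

```python
def groundedness(facts: list[tuple[str, list[str]]],
                 documents: list[tuple[str, str]]) -> float:
    """Fraction of cited facts supported by at least one cited document.

    `facts` is a list of (claim, [doc_id, ...]) pairs; `documents` maps each
    doc_id to its text. Support is approximated by case-insensitive substring
    matching, where a real evaluator would run textual entailment.
    """
    doc_text = {d_id: text for d_id, text in documents}
    supported = 0
    for claim, cited in facts:
        if any(claim.lower() in doc_text.get(d, "").lower() for d in cited):
            supported += 1
    return supported / len(facts) if facts else 0.0
```

Scoring at the fact level rather than the answer level is what lets a metric like this expose per-modality gaps, such as weaker grounding on image documents than on text documents.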
Problem

Research questions and friction points this paper is trying to address.

Evaluating multimodal source attribution systems for visual question answering
Addressing the gap in multimodal evidence retrieval and citation generation
Mitigating contextual bias in interpreting image documents for grounded answers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal source attribution benchmark for visual QA
Fine-grained automatic metrics for groundedness evaluation
Mitigating contextual bias in image document interpretation