MCiteBench: A Benchmark for Multimodal Citation Text Generation in MLLMs

📅 2025-03-04
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing multimodal large language models (MLLMs) suffer from pervasive attribution errors when generating citations in multimodal contexts, leading to hallucination and poor traceability. Method: The authors introduce MCiteBench, the first benchmark for evaluating multimodal citation text generation, constructed from academic papers and review-rebuttal interactions and featuring samples that mix images and text. It systematically evaluates MLLMs' ability to generate source-attributed text in multimodal contexts along three dimensions: citation quality, source reliability, and answer accuracy, integrating human verification with automated metrics. Results: State-of-the-art MLLMs achieve less than 40% attribution accuracy, significantly lower than in text-only settings, and analysis shows the bottleneck lies in attributing the correct sources rather than in understanding the multimodal content, establishing MCiteBench as a reproducible benchmark and diagnostic tool for trustworthy multimodal generation.

📝 Abstract
Multimodal Large Language Models (MLLMs) have advanced in integrating diverse modalities but frequently suffer from hallucination. A promising solution to mitigate this issue is to generate text with citations, providing a transparent chain for verification. However, existing work primarily focuses on generating citations for text-only content, overlooking the challenges and opportunities of multimodal contexts. To address this gap, we introduce MCiteBench, the first benchmark designed to evaluate and analyze the multimodal citation text generation ability of MLLMs. Our benchmark comprises data derived from academic papers and review-rebuttal interactions, featuring diverse information sources and multimodal content. We comprehensively evaluate models from multiple dimensions, including citation quality, source reliability, and answer accuracy. Through extensive experiments, we observe that MLLMs struggle with multimodal citation text generation. We also conduct deep analyses of models' performance, revealing that the bottleneck lies in attributing the correct sources rather than understanding the multimodal content.
Problem

Research questions and friction points this paper is trying to address.

Evaluates multimodal citation text generation in MLLMs
Addresses hallucination issues by generating verifiable citations
Identifies challenges in attributing correct sources in multimodal contexts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces MCiteBench for multimodal citation evaluation
Evaluates MLLMs on citation quality and source reliability
Identifies source attribution as key performance bottleneck
Caiyu Hu
Fudan University
NLP, Large Language Models
Yikai Zhang
Fudan University
Natural Language Processing, Autonomous Agent
Tinghui Zhu
University of California, Davis
Natural Language Processing, Vision-Language Models
Yiwei Ye
School of Computer Engineering and Science, Shanghai University
Yanghua Xiao
Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University