SPIQA: A Dataset for Multimodal Question Answering on Scientific Papers

📅 2024-07-12
🏛️ arXiv.org
📈 Citations: 4
✨ Influential: 1
📄 PDF
🤖 AI Summary
Existing question-answering (QA) benchmarks for scientific papers are limited in scale and focus solely on textual content, leaving integrated text-image comprehension underexplored. This work introduces SPIQA, the first large-scale multimodal QA dataset for scientific papers, comprising 270K questions grounded in diverse visual elements (plots, charts, tables, schematic diagrams, and result visualizations) across research articles in computer science. Methodologically, the authors combine automatic curation, leveraging multimodal large language models' (MLLMs) ability to interpret figures, with manual refinement, and propose a Chain-of-Thought (CoT) evaluation strategy with in-context retrieval that enables fine-grained, step-by-step assessment. Systematic evaluation of 12 prominent foundation models shows that in-context retrieval and auxiliary textual information substantially raise the achievable performance ceiling.

๐Ÿ“ Abstract
Seeking answers to questions within long scientific research articles is a crucial area of study that aids readers in quickly addressing their inquiries. However, existing question-answering (QA) datasets based on scientific papers are limited in scale and focus solely on textual content. We introduce SPIQA (Scientific Paper Image Question Answering), the first large-scale QA dataset specifically designed to interpret complex figures and tables within the context of scientific research articles across various domains of computer science. Leveraging the breadth of expertise and ability of multimodal large language models (MLLMs) to understand figures, we employ automatic and manual curation to create the dataset. We craft an information-seeking task on interleaved images and text that involves multiple images covering plots, charts, tables, schematic diagrams, and result visualizations. SPIQA comprises 270K questions divided into training, validation, and three different evaluation splits. Through extensive experiments with 12 prominent foundational models, we evaluate the ability of current multimodal systems to comprehend the nuanced aspects of research articles. Additionally, we propose a Chain-of-Thought (CoT) evaluation strategy with in-context retrieval that allows fine-grained, step-by-step assessment and improves model performance. We further explore the upper bounds of performance enhancement with additional textual information, highlighting its promising potential for future research and the dataset's impact on revolutionizing how we interact with scientific literature.
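As a rough illustration of the CoT evaluation strategy with in-context retrieval described above, the sketch below ranks a paper's figure/table captions by relevance to a question and folds the top matches into a step-by-step prompt. The function names, the word-overlap retriever, and the prompt template are all hypothetical stand-ins, not the authors' implementation:

```python
def retrieve_captions(question, captions, k=2):
    """Rank figure/table captions by naive word overlap with the question.

    A real system would use a learned retriever; plain set overlap keeps
    the sketch self-contained.
    """
    q_words = set(question.lower().split())
    scored = sorted(
        captions,
        key=lambda c: len(q_words & set(c.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_cot_prompt(question, captions):
    """Assemble a step-by-step (CoT) prompt grounded in retrieved captions."""
    context = "\n".join(f"- {c}" for c in retrieve_captions(question, captions))
    return (
        "You are answering a question about a scientific paper.\n"
        f"Relevant figure/table captions:\n{context}\n"
        f"Question: {question}\n"
        "Think step by step, citing the captions, then give a final answer."
    )
```

The prompt string would then be sent to an MLLM alongside the retrieved images; scoring the intermediate reasoning steps, rather than only the final answer, is what makes the assessment fine-grained.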
Problem

Research questions and friction points this paper is trying to address.

Scientific Paper Understanding
Image-Text Question Answering
Long Scientific Article QA
Innovation

Methods, ideas, or system contributions that make the work stand out.

SPIQA
Multimodal Scientific Content Understanding
Integrated Information Retrieval