🤖 AI Summary
Existing open-source models exhibit limited performance on Scientific Visual Question Answering (SVQA), primarily due to the absence of large-scale, high-quality public SVQA datasets; meanwhile, mainstream Large Vision-Language Model (LVLM)-based synthetic data generation methods suffer from modality bias and hallucination, leading to systematic errors in the generated question-answer (QA) pairs. To address this, we propose a "Generate-then-Verify" framework that first generates QA pairs from figure-associated textual context, then applies cross-modal consistency verification (joint visual-textual reasoning) together with auxiliary filters to systematically detect and eliminate low-quality samples. Leveraging this framework, we construct VeriSciQA, a rigorously validated dataset of 20,351 QA pairs spanning 20 scientific domains and 12 figure types, with a human-evaluated accuracy of 94.2%. Fine-tuning open-source models on VeriSciQA yields consistent improvements across multiple SVQA benchmarks, surpassing models trained on existing datasets and providing a scalable path toward stronger SVQA capability in the open-source community.
📝 Abstract
Large Vision-Language Models (LVLMs) show promise for scientific applications, yet open-source models still struggle with Scientific Visual Question Answering (SVQA), namely answering questions about figures from scientific papers. A key bottleneck lies in the lack of public, large-scale, high-quality SVQA datasets. Although recent work uses LVLMs to synthesize data at scale, we identify systematic errors in the resulting QA pairs, stemming from LVLMs' inherent limitations and the information asymmetry between figures and text. To address these challenges, we propose a verification-centric Generate-then-Verify framework that first generates QA pairs from figure-associated textual context, then applies cross-modal consistency checks against the figures, along with auxiliary filters, to eliminate erroneous pairs. We instantiate this framework to curate VeriSciQA, a dataset of 20,351 QA pairs spanning 20 scientific domains and 12 figure types. VeriSciQA poses a challenging benchmark for open-source models, with a substantial accuracy gap between the leading open-source models (64%) and a proprietary model (82%). Moreover, models fine-tuned on VeriSciQA achieve consistent improvements on SVQA benchmarks, with performance gains that scale with data size and surpass models trained on existing datasets. Human evaluation further validates the superior correctness of VeriSciQA. Together, this evidence demonstrates that continued data expansion with our scalable framework can further advance SVQA capability in the open-source community.
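The Generate-then-Verify pipeline described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `generate_candidates`, `consistent_with_figure`, `passes_aux_filters`, and the `figure_answerer` callback are all hypothetical stand-ins for the LVLM calls and filter heuristics the framework would actually use.

```python
from dataclasses import dataclass

@dataclass
class QAPair:
    question: str
    answer: str
    figure_id: str
    context: str  # figure-associated text (caption + referencing paragraphs)

def generate_candidates(figure_id, caption, body_text):
    """Stage 1 (hypothetical): generate candidate QA pairs from the
    figure-associated textual context. A stub stands in for an LVLM call."""
    context = caption + " " + body_text
    return [QAPair(
        question=f"What trend does figure {figure_id} show?",
        answer="accuracy increases with data size",
        figure_id=figure_id,
        context=context,
    )]

def consistent_with_figure(pair, answer_from_figure):
    """Stage 2 (hypothetical): cross-modal consistency check. Re-answer the
    question from the figure alone and keep the pair only if answers agree."""
    return pair.answer.strip().lower() == answer_from_figure.strip().lower()

def passes_aux_filters(pair):
    """Stage 3 (hypothetical): auxiliary filters, e.g. reject long answers
    copied verbatim from the textual context (a symptom of modality bias)."""
    return pair.answer not in pair.context or len(pair.answer.split()) <= 8

def generate_then_verify(figure_id, caption, body_text, figure_answerer):
    """Run all stages; only pairs surviving every check are kept."""
    verified = []
    for pair in generate_candidates(figure_id, caption, body_text):
        if consistent_with_figure(pair, figure_answerer(pair)) and \
           passes_aux_filters(pair):
            verified.append(pair)
    return verified
```

The key design point is that verification is a hard gate, not a scoring heuristic: a candidate pair is discarded whenever the figure-only answer disagrees with the text-derived answer, which is how systematic text-induced errors get filtered out.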