🤖 AI Summary
Scientific claim verification is hampered by complex evidence structures, domain-specific terminology, and a scarcity of high-quality benchmark datasets. To address this, we introduce SciClaimHunt, a large-scale annotated dataset for scientific claim verification, along with a numerically focused variant, SciClaimHunt_Num, both derived from real scholarly publications through automated extraction, rule-based filtering, and human validation. We further propose several baseline models tailored to scientific text, combining domain-adapted BERT encoders with evidence retrieval and a multi-stage verification pipeline. Experiments show clear gains over existing methods in cross-dataset generalization, handling of domain terminology, and evidence-claim alignment accuracy, and human evaluation of the claims yields strong inter-annotator agreement (92.3%). Together, these results position SciClaimHunt and SciClaimHunt_Num as reliable large-scale benchmarks for scientific claim verification.
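The summary describes an NLI-style setup in which a domain-adapted encoder scores a claim against retrieved evidence. Below is a minimal sketch of that idea, assuming a standard claim-evidence pair classifier; the encoder name, the three-way label set, and the `verify` helper are illustrative assumptions, not the paper's published architecture, and the classification head here is untrained.

```python
# Minimal sketch of a claim-evidence verification baseline (assumed setup,
# not the paper's exact model). Requires: pip install torch transformers
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "allenai/scibert_scivocab_uncased"   # assumed domain-adapted encoder
LABELS = ["SUPPORT", "REFUTE", "NOT_ENOUGH_INFO"]  # assumed label set

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# The sequence-classification head is randomly initialized here; in practice
# it would be fine-tuned on claim-evidence pairs from the training split.
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=len(LABELS)
)

def verify(claim: str, evidence: str) -> str:
    """Encode the claim-evidence pair jointly and return the argmax label."""
    inputs = tokenizer(claim, evidence, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

print(verify(
    "Drug X reduces systolic blood pressure.",
    "In a randomized trial, drug X lowered systolic blood pressure by 12 mmHg.",
))
```

In a multi-stage pipeline of the kind the summary mentions, a retrieval step would first select candidate evidence passages, and a classifier like this would then score each claim-evidence pair.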
📝 Abstract
Verifying scientific claims is significantly more challenging than verifying political or news-related claims. Unlike the relatively broad audience for political claims, the users of scientific claim verification systems can vary widely, ranging from researchers testing specific hypotheses to everyday users seeking information about a medication. Additionally, the evidence for scientific claims is often highly complex, involving technical terminology and intricate domain-specific concepts that require specialized models for accurate verification. Despite considerable interest from the research community, there is a noticeable lack of large-scale scientific claim verification datasets for benchmarking and training effective models. To bridge this gap, we introduce two large-scale datasets, SciClaimHunt and SciClaimHunt_Num, derived from scientific research papers. We propose several baseline models tailored for scientific claim verification to assess the effectiveness of these datasets. Additionally, we evaluate models trained on SciClaimHunt and SciClaimHunt_Num on existing scientific claim verification datasets to gauge their quality and reliability. Furthermore, we conduct human evaluations of the claims in the proposed datasets and perform error analysis to assess the effectiveness of the proposed baseline models. Our findings indicate that SciClaimHunt and SciClaimHunt_Num serve as highly reliable resources for training models in scientific claim verification.
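The cross-dataset evaluation mentioned in the abstract amounts to scoring a model trained on SciClaimHunt against the labeled test split of another claim verification dataset. A minimal sketch of that loop follows; the example schema (`claim`, `evidence`, `label` fields) and the `verify` helper from the previous sketch are assumptions, not the paper's published protocol.

```python
# Hedged sketch of cross-dataset evaluation: a trained claim verifier is
# scored on another dataset's test set. Requires: pip install scikit-learn
from sklearn.metrics import accuracy_score, f1_score

def evaluate(model_verify, test_set):
    """model_verify: callable mapping (claim, evidence) -> label string.
    test_set: iterable of dicts with 'claim', 'evidence', 'label' keys
    (assumed schema)."""
    gold, pred = [], []
    for ex in test_set:
        gold.append(ex["label"])
        pred.append(model_verify(ex["claim"], ex["evidence"]))
    return {
        "accuracy": accuracy_score(gold, pred),
        "macro_f1": f1_score(gold, pred, average="macro"),
    }
```

Reporting both accuracy and macro-F1 is a common choice for this task because label distributions in claim verification datasets are often imbalanced.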