🤖 AI Summary
Existing retrieval methods for scientific fact-checking rank documents solely by textual relevance, ignoring whether a document actually supports or refutes the claim, which leads to suboptimal evidence selection. To address this, we propose a fine-grained evidence assessment framework tailored to scientific fact-checking that explicitly models verification success—a document's capacity to substantiate or refute the claim—as the core signal for relevance ranking. Our approach integrates information retrieval with natural language inference, using the outputs of a verification model to re-rank retrieved documents, moving beyond surface-level matching. Empirical evaluation demonstrates state-of-the-art evidence retrieval performance on three benchmark datasets—SciFact, SciFact-Open, and Check-Covid—and yields significant improvements in downstream fact-checking accuracy.
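The summary above does not specify +VeriRel's exact scoring function, but the core re-ranking idea can be sketched as follows. This is an illustrative assumption, not the paper's formula: blend a normalized retrieval score with a "verification success" signal from an NLI-style model (the probability that the document supports or refutes the claim, rather than providing no evidence). The function name `verirel_rerank`, the blend weight `alpha`, and the score layout are all hypothetical.

```python
# Illustrative sketch (NOT the paper's exact method): re-rank retrieved
# documents by combining retrieval relevance with verification success.
# `alpha` and the score combination are assumptions for demonstration.

def verirel_rerank(docs, alpha=0.3):
    """docs: list of dicts with keys
    - 'id': document identifier
    - 'retrieval_score': raw score from an IR system (e.g. BM25)
    - 'verification_probs': (p_support, p_refute, p_not_enough_info)
      from a claim-verification (NLI) model.
    Returns docs sorted by the combined score, best first."""
    # Min-max normalize retrieval scores so they are comparable to probabilities.
    scores = [d["retrieval_score"] for d in docs]
    lo, hi = min(scores), max(scores)
    span = (hi - lo) or 1.0

    def combined(d):
        rel = (d["retrieval_score"] - lo) / span
        # "Verification success": how strongly the document supports OR
        # refutes the claim (either polarity counts as useful evidence).
        p_support, p_refute, _ = d["verification_probs"]
        ver = max(p_support, p_refute)
        return alpha * rel + (1 - alpha) * ver

    return sorted(docs, key=combined, reverse=True)

# Usage: a lexically strong but non-evidential doc is demoted below a
# doc the verifier finds strongly supportive.
docs = [
    {"id": "d1", "retrieval_score": 9.0, "verification_probs": (0.10, 0.10, 0.80)},
    {"id": "d2", "retrieval_score": 7.0, "verification_probs": (0.90, 0.05, 0.05)},
]
ranked = verirel_rerank(docs)
```

Note the design choice that both supporting and refuting evidence raise a document's rank; only "not enough info" documents are penalized, matching the framing that evidence of either polarity is valuable for fact-checking.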
📝 Abstract
Identification of appropriate supporting evidence is critical to the success of scientific fact checking. However, existing approaches rely on off-the-shelf Information Retrieval algorithms that rank documents based on relevance rather than on the evidence they provide to support or refute the claim being checked. This paper proposes +VeriRel, which incorporates verification success into document ranking. Experimental results on three scientific fact checking datasets (SciFact, SciFact-Open and Check-Covid) demonstrate that +VeriRel consistently achieves leading performance for document evidence retrieval and has a positive impact on downstream verification. This study highlights the potential of integrating verification feedback into document relevance assessment for effective scientific fact checking systems, and points to promising future work on evaluating fine-grained relevance when examining complex documents for advanced scientific fact checking.