Ev2R: Evaluating Evidence Retrieval in Automated Fact-Checking

📅 2024-11-08
🏛️ arXiv.org
📈 Citations: 1
Influential: 1
🤖 AI Summary
Existing evidence evaluation methods for automated fact-checking (AFC) suffer from two key limitations: they either infer evidence quality indirectly from the final verdicts or rely on matching against closed knowledge sources (e.g., Wikipedia), resulting in narrow assessments with limited generalizability. To address this, we propose Ev2R, the first evaluation framework explicitly designed to assess evidence retrieval quality, covering three evaluation paradigms: reference-based, proxy-reference, and reference-less. We introduce LLM-driven, prompt-based scorers, validated through agreement with human ratings and adversarial testing. Experiments across multiple datasets demonstrate that these scorers achieve over 35% higher correlation with human judgments than conventional metrics (e.g., ROUGE, BERTScore) and exhibit superior robustness to noise and perturbations.
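For illustration only (the paper does not include code here): a minimal sketch of what a reference-based, prompt-based scorer of this kind could look like, assuming a generic `llm` callable that returns the model's text reply. The function name, prompt wording, and yes/no aggregation are illustrative assumptions, not the authors' exact setup.

```python
from typing import Callable, List

def reference_based_score(
    claim: str,
    retrieved: List[str],
    reference: List[str],
    llm: Callable[[str], str],
) -> float:
    """Ask an LLM judge how much of the gold reference evidence is covered
    by the retrieved evidence; returns the covered fraction in [0, 1]."""
    if not reference:
        return 0.0
    covered = 0
    for ref_piece in reference:
        prompt = (
            f"Claim: {claim}\n"
            f"Reference evidence: {ref_piece}\n"
            "Retrieved evidence:\n- " + "\n- ".join(retrieved) + "\n\n"
            "Does the retrieved evidence cover the facts in the reference evidence? "
            "Answer 'yes' or 'no'."
        )
        answer = llm(prompt).strip().lower()
        covered += int(answer.startswith("yes"))
    return covered / len(reference)
```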

📝 Abstract
Current automated fact-checking (AFC) approaches commonly evaluate evidence either implicitly via the predicted verdicts or by comparing retrieved evidence with a predefined closed knowledge source, such as Wikipedia. However, these methods suffer from limitations, resulting from their reliance on evaluation metrics developed for different purposes and constraints imposed by closed knowledge sources. Recent advances in natural language generation (NLG) evaluation offer new possibilities for evidence assessment. In this work, we introduce Ev2R, an evaluation framework for AFC that comprises three types of approaches for evidence evaluation: reference-based, proxy-reference, and reference-less. We evaluate their effectiveness through agreement with human ratings and adversarial tests, and demonstrate that prompt-based scorers, particularly those leveraging LLMs and reference evidence, outperform traditional evaluation approaches.
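The abstract reports validating scorers through agreement with human ratings; a tiny illustrative example of how such agreement could be computed is shown below, with made-up scores and a metric choice (Pearson and Kendall correlation via SciPy) that is an assumption rather than the paper's exact protocol.

```python
from scipy.stats import kendalltau, pearsonr

# Made-up per-example scores: an automatic evidence scorer vs. human ratings.
scorer_scores = [0.9, 0.4, 0.7, 0.2, 0.8]
human_ratings = [1.0, 0.5, 0.6, 0.1, 0.9]

r, _ = pearsonr(scorer_scores, human_ratings)      # linear correlation
tau, _ = kendalltau(scorer_scores, human_ratings)  # rank correlation
print(f"Pearson r = {r:.3f}, Kendall tau = {tau:.3f}")
```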
Problem

Research questions and friction points this paper is trying to address.

Evaluating evidence retrieval in automated fact-checking systems
Overcoming the limitations of relying on closed knowledge sources
Assessing evidence alignment with references and verdict support
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines reference-based and verdict-level evaluation (sketched below)
Assesses both evidence alignment with references and support for the verdict
Prompt-based LLM scorers outperform traditional metrics in accuracy and robustness
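As referenced above, a hedged sketch of what the verdict-level (reference-less) check could look like, again assuming a generic `llm` callable; the prompt and label handling are illustrative, not the authors' implementation.

```python
from typing import Callable, List

def verdict_support_score(
    claim: str,
    retrieved: List[str],
    gold_verdict: str,
    llm: Callable[[str], str],
) -> float:
    """Check whether an LLM judge reaches the gold verdict from the retrieved
    evidence alone; averaging over a dataset gives a verdict-level score."""
    prompt = (
        f"Claim: {claim}\n"
        "Evidence:\n- " + "\n- ".join(retrieved) + "\n\n"
        "Based only on this evidence, is the claim 'supported', 'refuted', "
        "or is there 'not enough evidence'? Answer with one label."
    )
    predicted = llm(prompt).strip().lower()
    return float(gold_verdict.lower() in predicted)
```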
Mubashara Akhtar
ETH AI Center fellow at ETH Zurich
NLP · Multimodality · Benchmarking & Evaluation
Michael Schlichtkrull
School of Electronic Engineering and Computer Science, Queen Mary University of London
Andreas Vlachos
Department of Computer Science and Technology, University of Cambridge