User-Centric Evidence Ranking for Attribution and Fact Verification

πŸ“… 2026-01-29
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work proposes a user-centric evidence ranking task to enhance fact-checking systems, which often present users with insufficient or redundant evidence, impairing verification efficiency and accuracy. The approach prioritizes presenting sufficient, non-redundant evidence to support progressive verification while reducing users’ cognitive load. To this end, the authors establish a unified benchmark and an evaluation framework that balances early sufficiency with accessibility. They design both one-shot and incremental ranking strategies, integrating large language models with information retrieval metrics. Experimental results demonstrate that incremental ranking captures complementary evidence more effectively and that large language models substantially outperform shallow baselines. A user study further confirms that the proposed method improves verification accuracy while significantly reducing the amount of text users need to read.
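
A minimal Python sketch can illustrate the difference between the two strategies. The names `incremental_rank`, `one_shot_rank`, `score_fn`, and `relevance_fn` are hypothetical placeholders introduced for illustration (an LLM judgment or embedding similarity could back the scoring hooks); this is not the authors' implementation.

```python
# Hypothetical sketch of the two ranking strategies described above.
# score_fn(claim, selected, candidate) is an assumed hook returning the
# marginal usefulness of `candidate` given the evidence already ranked.
from typing import Callable, List


def incremental_rank(
    claim: str,
    evidence_pool: List[str],
    score_fn: Callable[[str, List[str], str], float],
) -> List[str]:
    """Greedy incremental ranking: at each step, append the candidate with
    the highest marginal score given the evidence already placed, so
    complementary pieces surface early while the full pool stays accessible."""
    remaining = list(evidence_pool)
    ranking: List[str] = []
    while remaining:
        best = max(remaining, key=lambda cand: score_fn(claim, ranking, cand))
        ranking.append(best)
        remaining.remove(best)
    return ranking


def one_shot_rank(
    claim: str,
    evidence_pool: List[str],
    relevance_fn: Callable[[str, str], float],
) -> List[str]:
    """One-shot ranking: score each candidate independently against the
    claim, which can leave redundant near-duplicates clustered at the top."""
    return sorted(evidence_pool, key=lambda e: relevance_fn(claim, e), reverse=True)
```

The key design difference is that the incremental variant conditions each choice on what the user has already been shown, which is why it can favor complementary rather than redundant evidence.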

πŸ“ Abstract
Attribution and fact verification are critical challenges in natural language processing for assessing information reliability. While automated systems and Large Language Models (LLMs) aim to retrieve and select concise evidence to support or refute claims, they often present users with either insufficient or overly redundant information, leading to inefficient and error-prone verification. To address this, we propose Evidence Ranking, a novel task that prioritizes presenting sufficient information as early as possible in a ranked list. This minimizes user reading effort while still making all available evidence accessible for sequential verification. We compare two approaches for the new ranking task: one-shot ranking and incremental ranking. We introduce a new evaluation framework, inspired by information retrieval metrics, and construct a unified benchmark by aggregating existing fact verification datasets. Extensive experiments with diverse models show that incremental ranking strategies better capture complementary evidence and that LLM-based methods outperform shallower baselines, while still facing challenges in balancing sufficiency and redundancy. In a controlled user study, we further demonstrate that, compared to evidence selection, evidence ranking both reduces reading effort and improves verification accuracy. This work provides a foundational step toward more interpretable, efficient, and user-aligned information verification systems.
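
The evaluation framework itself is not reproduced here, but a hedged sketch of an "early sufficiency" metric in the spirit of reciprocal rank conveys the idea: reward rankings whose shortest sufficient prefix appears early. `sufficiency_rank` and the `is_sufficient` oracle (e.g., gold sufficiency annotations) are assumptions introduced for illustration, not the paper's exact metric.

```python
# Hypothetical early-sufficiency metric: 1/k, where k is the first rank at
# which the accumulated evidence prefix suffices to verify the claim.
from typing import Callable, List


def sufficiency_rank(
    ranking: List[str],
    is_sufficient: Callable[[List[str]], bool],
) -> float:
    """Return the reciprocal of the first rank whose prefix is sufficient,
    or 0.0 if no prefix (including the full list) is, mirroring how
    reciprocal rank handles queries with no relevant document."""
    for k in range(1, len(ranking) + 1):
        if is_sufficient(ranking[:k]):
            return 1.0 / k
    return 0.0


if __name__ == "__main__":
    # Toy example: the claim needs both pieces "a" and "c" to be verifiable.
    gold = {"a", "c"}
    check = lambda prefix: gold.issubset(prefix)
    print(sufficiency_rank(["a", "c", "b", "d"], check))  # 0.5: sufficient at rank 2
    print(sufficiency_rank(["b", "d", "a", "c"], check))  # 0.25: sufficient at rank 4
```

Under a metric like this, a ranking that front-loads complementary evidence scores higher than one that front-loads redundant but individually relevant pieces.
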
Problem

Research questions and friction points this paper is trying to address.

attribution
fact verification
evidence ranking
user-centric
information redundancy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evidence Ranking
User-Centric Verification
Incremental Ranking
Fact Verification
LLM-based Evidence Retrieval