🤖 AI Summary
This work addresses the lack of reliable tools for evaluating online news credibility, which hinders readers' ability to make informed judgments. To this end, we develop AutoJudge, an automated evaluation pipeline for the TREC 2025 DRAGUN track, built on the MS MARCO V2.1 corpus and human-curated rubrics that assign importance-weighted scores across 30 news articles. AutoJudge provides the first standardized human-annotated benchmark, together with a highly correlated automatic evaluation mechanism, tailored to retrieval-augmented generation (RAG) systems for news credibility assessment. Experimental results show strong agreement between AutoJudge and human evaluations, with Kendall's τ coefficients of 0.678 and 0.872 on the two tasks, offering an effective and reproducible framework for the automated evaluation and optimization of credibility-focused RAG systems.
📝 Abstract
Many readers today struggle to assess the trustworthiness of online news because reliable reporting coexists with misinformation. The TREC 2025 DRAGUN (Detection, Retrieval, and Augmented Generation for Understanding News) Track provided a venue for researchers to develop and evaluate assistive RAG systems that support readers' news trustworthiness assessment by producing reader-oriented, well-attributed reports. As the organizers of the DRAGUN track, we describe the resources we have newly developed to allow the track's tasks to be reused. The track had two tasks: (Task 1) Question Generation, producing 10 ranked investigative questions; and (Task 2, the main task) Report Generation, producing a 250-word report grounded in the MS MARCO V2.1 Segmented Corpus. As part of the track's evaluation, TREC assessors created importance-weighted rubrics of questions with expected short answers for 30 different news articles. These rubrics capture the information that assessors believe is important for readers to assess an article's trustworthiness. The assessors then used their rubrics to manually judge the participating teams' submitted runs. To make these tasks and their rubrics reusable, we have created an automated process to judge runs that were not part of the original assessment. We show that our AutoJudge ranks existing runs well compared to the TREC human-assessed evaluation (Kendall's $\tau = 0.678$ for Task 1 and $\tau = 0.872$ for Task 2). These resources enable both the evaluation of RAG systems for assistive news trustworthiness assessment and, with the human evaluation as a benchmark, research on improving automated RAG evaluation.
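To make the agreement metric concrete: Kendall's τ compares how two evaluators rank the same set of runs by counting concordant versus discordant pairs. The sketch below is a minimal pure-Python implementation of τ-a (it assumes untied scores and is not the track's actual evaluation code):

```python
def kendall_tau(scores_a, scores_b):
    """Kendall's tau-a between two score lists over the same items.

    Counts pairs of items ordered the same way (concordant) vs.
    oppositely (discordant) by the two scorers; assumes no ties.
    """
    n = len(scores_a)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            sign = (scores_a[i] - scores_a[j]) * (scores_b[i] - scores_b[j])
            if sign > 0:
                concordant += 1
            elif sign < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical example: human vs. automatic scores for four runs.
human = [0.9, 0.7, 0.5, 0.2]
auto = [0.8, 0.9, 0.4, 0.1]  # one adjacent pair swapped
print(kendall_tau(human, auto))  # → 0.6666666666666666
```

A τ of 1.0 means the automatic judge reproduces the human ranking exactly; the reported 0.678 (Task 1) and 0.872 (Task 2) indicate strong but imperfect rank agreement.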