Measuring Large Language Models Capacity to Annotate Journalistic Sourcing

📅 2024-12-30
🤖 AI Summary
This study addresses the challenge of evaluating large language models’ (LLMs) ability to identify, classify, and verify information sources in news reporting—critical for journalistic transparency and ethical practice. Methodologically, we introduce the first fine-grained evaluation framework for news provenance: grounded in the five canonical journalistic source categories, it formalizes a three-tier task hierarchy—quoted statement identification, source type classification, and justification legitimacy detection—and proposes an ethics-aware annotation schema alongside multi-level accuracy metrics (statement/source/justification). We release the first human-annotated, diverse-source news provenance benchmark dataset. Experimental results show that state-of-the-art LLMs achieve moderate performance on quoted statement identification and source classification but exhibit significant deficiencies in detecting ethical justifications—highlighting an urgent need for improved alignment with journalistic ethics.
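To make the three-tier schema concrete, here is a minimal sketch of what a single annotation record could look like. The class and field names, and the five placeholder category labels, are hypothetical illustrations for this summary; the paper's actual Gans-inspired category names and schema fields may differ.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class SourceType(Enum):
    # Hypothetical placeholder labels for a five-category,
    # Gans (2004)-inspired schema; the paper's own names may differ.
    NAMED_PERSON = "named person"
    ANONYMOUS_PERSON = "anonymous person"
    DOCUMENT = "document"
    ORGANIZATION = "organization"
    OTHER = "other"

@dataclass
class SourcedStatement:
    # Tier 1: the quoted/sourced statement span in the story.
    statement: str
    # Tier 2: which of the five source categories the statement is attributed to.
    source_type: SourceType
    # Tier 3: the reporter's stated justification for using the source
    # (e.g., why anonymity was granted), or None if none is given.
    justification: Optional[str] = None
```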

📝 Abstract
Since the launch of ChatGPT in late 2022, the capacities of Large Language Models and how to evaluate them have been under constant discussion in both academic research and industry. Scenarios and benchmarks have been developed in several areas such as law, medicine, and math (Bommasani et al., 2023), and model variants are continuously evaluated. One area that has not received sufficient scenario-development attention is journalism, in particular journalistic sourcing and ethics. Journalism is a crucial truth-determination function in democracy (Vincent, 2023), and sourcing is a foundational pillar of all original journalistic output. Evaluating the capacity of LLMs to annotate stories for the different signals of sourcing, and for how reporters justify their sources, is a scenario that warrants a benchmark approach. It offers the potential to build automated systems that contrast more transparent and ethically rigorous forms of journalism with everyday fare. In this paper we lay out a scenario to evaluate LLM performance on identifying and annotating sourcing in news stories, using a five-category schema inspired by journalism studies (Gans, 2004). We present the use case, our dataset, and metrics as a first step towards systematic benchmarking. Our accuracy findings indicate that LLM-based approaches have more catching up to do in identifying all the sourced statements in a story, and equally in matching the type of sources. An even harder task is spotting source justifications.
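As a rough illustration of the statement/source/justification accuracy metrics the abstract mentions, the sketch below scores predicted annotations against gold annotations at the three levels. The exact-match alignment on statement text is an assumption made for illustration; the paper's actual matching procedure may be more tolerant.

```python
from typing import List, Optional, Tuple

# Each annotation: (statement text, source type label, justification or None).
Annotation = Tuple[str, str, Optional[str]]

def multi_level_accuracy(gold: List[Annotation],
                         pred: List[Annotation]) -> dict:
    """Score predictions against gold annotations at three levels."""
    pred_by_statement = {p[0]: p for p in pred}
    statement_hits = source_hits = justification_hits = 0
    for g_statement, g_source, g_justification in gold:
        p = pred_by_statement.get(g_statement)
        if p is None:
            continue  # Level 1 miss: sourced statement not identified.
        statement_hits += 1
        if p[1] == g_source:
            source_hits += 1  # Level 2: source type matches.
        if p[2] == g_justification:
            justification_hits += 1  # Level 3: justification matches.
    n = max(len(gold), 1)  # Guard against an empty gold set.
    return {
        "statement_accuracy": statement_hits / n,
        "source_accuracy": source_hits / n,
        "justification_accuracy": justification_hits / n,
    }
```

Under this scoring, justification accuracy can never exceed source accuracy for a given story only if matching is conditioned on the earlier tiers; here the three levels are scored independently once a statement is found, which is one plausible reading of the multi-level metrics.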
Problem

Research questions and friction points this paper is trying to address.

Language Models
News Source Identification
Journalistic Ethics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
News Source Annotation
Ethical Journalism
Subramaniam Vincent
Director, Journalism and Media Ethics, Markkula Center for Applied Ethics, Santa Clara University
Journalism, journalism ethics, media ethics, artificial intelligence, digital media technology
Phoebe Wang
Department of Computer Science and Engineering, Santa Clara University, Santa Clara, CA
Zhan Shi
Department of Computer Science and Engineering, Santa Clara University, Santa Clara, CA
Sahas Koka
Dublin High School, Dublin, CA
Yi Fang
Department of Computer Science and Engineering, Santa Clara University, Santa Clara, CA