🤖 AI Summary
Traditional automated tools struggle to replace journalists in assessing news value amid massive, rapidly updating digital information streams.
Method: This paper proposes a prompt engineering framework that encodes journalistic news values—timeliness, impact, controversy, and generalizability—to transform large language models (LLMs) into an intelligent first-stage filter for news lead detection, integrated into real-world news monitoring pipelines to enable human-AI hybrid decision-making.
Contribution/Results: Validated against manually annotated data, the system achieves an F1-score of 0.94 for lead extraction and up to 92% accuracy in coarse-grained news value assessment on daily-updated trade press content, substantially improving high-value lead discovery while suppressing noise. Its core innovation lies in computationally formalizing journalistic principles into structured prompts, establishing a reusable methodological paradigm for augmenting professional editorial judgment with LLMs.
📝 Abstract
Journalists face mounting challenges in monitoring ever-expanding digital information streams to identify newsworthy content. While traditional automation tools gather information at scale, they struggle with the editorial judgment needed to assess newsworthiness. This paper investigates whether large language models (LLMs) can serve as effective first-pass filters for journalistic monitoring. We develop a prompt-based approach that encodes journalistic news values (timeliness, impact, controversy, and generalizability) into LLM instructions to extract and evaluate potential story leads. We validate our approach across multiple models against expert-annotated ground truth, then deploy a real-world monitoring pipeline that processes trade press articles daily. Our evaluation reveals strong performance in extracting relevant leads from source material ($F1=0.94$) and in coarse newsworthiness assessment ($\pm 1$ accuracy up to 92%), but the models consistently struggle with nuanced editorial judgments requiring beat expertise. The system proves most valuable as a hybrid tool combining automated monitoring with human review, successfully surfacing novel, high-value leads while filtering obvious noise. We conclude with practical recommendations for integrating LLM-powered monitoring into newsroom workflows in ways that preserve editorial judgment while extending journalistic capacity.
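The abstract describes encoding news values into LLM instructions and parsing the resulting leads. A minimal sketch of what such a pipeline stage could look like is below; the prompt wording, JSON schema, and field names are illustrative assumptions, not the authors' published prompts.

```python
import json

# Assumed news values, taken from the abstract. The prompt text and the
# JSON response schema below are hypothetical, for illustration only.
NEWS_VALUES = ["timeliness", "impact", "controversy", "generalizability"]

PROMPT_TEMPLATE = """You are a first-pass news monitoring assistant.
Extract potential story leads from the article below, then rate each lead
from 1 (not newsworthy) to 5 (highly newsworthy) on these news values:
{values}.
Respond as a JSON list of objects with keys "lead", "scores", "overall".

Article:
{article}
"""

def build_prompt(article: str) -> str:
    """Fill the template with the news values and the source article."""
    return PROMPT_TEMPLATE.format(values=", ".join(NEWS_VALUES),
                                  article=article)

def parse_response(raw: str) -> list:
    """Parse the model's JSON reply, keeping only well-formed leads."""
    leads = json.loads(raw)
    return [l for l in leads
            if isinstance(l, dict) and {"lead", "scores", "overall"} <= l.keys()]

# Mock model reply, as a monitoring pipeline might receive it.
mock_reply = ('[{"lead": "Regulator opens probe into supplier", '
              '"scores": {"timeliness": 5, "impact": 4, '
              '"controversy": 4, "generalizability": 3}, "overall": 4}]')

for item in parse_response(mock_reply):
    print(item["lead"], "->", item["overall"])
```

In a deployed hybrid workflow like the one the paper describes, leads passing a score threshold would be queued for human editorial review rather than published automatically.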