Computational Fact-Checking of Online Discourse: Scoring scientific accuracy in climate change related news articles

πŸ“… 2025-05-12
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Efficiently and transparently assessing the scientific accuracy of climate change news remains challenging. Method: This paper proposes a semi-automated fact-checking framework built on a neuro-symbolic architecture that couples large language model (LLM)-driven claim extraction with domain-specific knowledge graph–based semantic alignment and logical reasoning, enabling fine-grained scientific credibility scoring of news statements against authoritative sources. Contributions/Results: It is the first work to combine LLMs and knowledge graphs for fact-checking in climate communication; it introduces interpretable, verifiable quantitative metrics; and expert interviews and a user survey indicate improvements in checking efficiency and trustworthiness. The current indicator, however, cannot yet annotate public media at the required granularity and scale. Future work includes developing a FAIR (Findable, Accessible, Interoperable, Reusable) ground-truth benchmark and complementary metrics to scale the system for broader deployment.
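At a high level, the pipeline summarized above can be sketched as a claim-extraction step feeding a symbolic lookup against trusted knowledge-graph triples. The code below is a hypothetical illustration, not the paper's implementation: the `extract_claims` stub stands in for the LLM-driven extraction (here it just parses a `subject | predicate | object` layout), and `score_against_kg` reduces credibility scoring to simple triple membership.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Claim:
    """A single factual statement as a subject-predicate-object triple."""
    subject: str
    predicate: str
    obj: str


def extract_claims(text: str) -> list[Claim]:
    # Stand-in for the paper's LLM-driven extraction: in the real system an
    # LLM turns article sentences into structured statements. Here we parse
    # a naive "subject | predicate | object" line format instead.
    claims = []
    for line in text.strip().splitlines():
        parts = line.split("|")
        if len(parts) == 3:
            claims.append(Claim(*(p.strip() for p in parts)))
    return claims


def score_against_kg(claims: list[Claim], trusted_triples: set) -> float:
    """Fraction of extracted claims found verbatim in the trusted graph.

    The paper's scoring is far richer (semantic alignment, logical
    reasoning); exact triple membership is the simplest possible proxy.
    """
    if not claims:
        return 0.0
    hits = sum((c.subject, c.predicate, c.obj) in trusted_triples for c in claims)
    return hits / len(claims)
```

For example, against a trusted set containing only `("CO2", "causes", "warming")`, an article asserting both that triple and `("CO2", "causes", "cooling")` would score 0.5.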

πŸ“ Abstract
Democratic societies need reliable information. Misinformation in popular media such as news articles or videos threatens to impair civic discourse. Citizens, unfortunately, are not equipped to verify the flood of content they consume daily at increasing rates. This work aims to semi-automatically quantify the scientific accuracy of online media. Once media of unknown veracity are semantified, their statements can be compared against equally processed trusted sources. We implemented a workflow using LLM-based statement extraction and knowledge graph analysis. Our neurosymbolic system demonstrably streamlined state-of-the-art veracity quantification. Evaluated via expert interviews and a user survey, the tool provides a beneficial veracity indication. This indicator, however, is unable to annotate public media at the required granularity and scale. Further work towards a FAIR (Findable, Accessible, Interoperable, Reusable) ground truth and complementary metrics is required to scientifically support civic discourse.
Problem

Research questions and friction points this paper is trying to address.

Quantify scientific accuracy in climate change news
Compare online media statements against trusted sources
Streamline veracity quantification using neurosymbolic systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based statement extraction for fact-checking
Knowledge graph analysis for veracity comparison
Neurosymbolic system streamlining veracity quantification
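The knowledge-graph analysis listed above presupposes an alignment step that maps free-text mentions onto canonical graph entities before any symbolic lookup. A minimal sketch follows; it is an illustrative stand-in only: the entity list is invented, and `difflib`-based string similarity replaces whatever learned semantic representations the paper actually uses.

```python
import difflib

# Hypothetical canonical entities from a climate knowledge graph.
KG_ENTITIES = ["carbon dioxide", "global mean temperature", "sea level"]


def align_entity(mention: str, entities=KG_ENTITIES, cutoff=0.6):
    """Map a free-text mention to the closest canonical KG entity.

    Returns None when no entity clears the similarity cutoff, so
    downstream scoring can treat the claim as unverifiable.
    """
    matches = difflib.get_close_matches(mention.lower(), entities,
                                        n=1, cutoff=cutoff)
    return matches[0] if matches else None
```

For instance, `align_entity("Carbon Dioxide")` resolves to `"carbon dioxide"`, while a mention with no plausible counterpart returns `None`.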
πŸ”Ž Similar Papers
No similar papers found.