🤖 AI Summary
This study investigates whether large language models (LLMs) can automatically detect misinformation in scientific news under a realistic, claim-agnostic setting, that is, without explicit claim annotations. Method: We introduce SciNews, a new benchmark dataset for scientific credibility assessment comprising 2.4k science news articles paired with their corresponding research abstracts, covering both human-written and LLM-generated texts. We propose an end-to-end LLM-based detection framework that removes the reliance on manual claim extraction and instead models scientific validity along multiple dimensions, enabling joint evaluation of human- and AI-authored news. Using GPT-3.5, GPT-4, Llama2-7B, and Llama2-13B with zero-shot, few-shot, and chain-of-thought prompting, we systematically evaluate cross-style (research abstract ↔ popular news) misinformation detection. Results: GPT-4 achieves 82.3% accuracy on SciNews, significantly outperforming the baselines and demonstrating the practical potential of LLMs for scalable, automated scientific fact-checking.
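To make the claim-agnostic setup concrete, the sketch below shows one plausible shape for a SciNews-style record: a news article paired with its source abstract, a credibility label, and an author-type flag. All field names are assumptions for illustration; the released dataset schema may differ.

```python
# Hypothetical shape of a SciNews-style record; field names are assumptions,
# not the dataset's released schema.
from dataclasses import dataclass


@dataclass
class SciNewsRecord:
    abstract: str      # source research abstract (e.g., drawn from CORD-19)
    article: str       # popular-press news story reporting on that research
    is_reliable: bool  # credibility label for the article
    author_type: str   # "human" or "llm" (the dataset covers both)


example = SciNewsRecord(
    abstract="We observe a modest correlation between X and Y in 120 patients.",
    article="Scientists prove X causes Y, large study shows.",
    is_reliable=False,
    author_type="llm",
)
print(example.article, "->", "reliable" if example.is_reliable else "unreliable")
```

Because the label attaches to the whole article-abstract pair rather than to extracted claims, no claim-level annotation is needed, which is the point of the claim-agnostic setting.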
📝 Abstract
Scientific facts are often spun in the popular press with the intent to influence public opinion and action, as was evident during the COVID-19 pandemic. Automatic detection of misinformation in the scientific domain is challenging because of the distinct writing styles of these two media types, and it is still in its infancy. Most research on the validity of scientific reporting treats the problem as a claim-verification task, which requires significant expert human effort to generate appropriate claims. Our solution bypasses this step and addresses a more realistic scenario in which such explicit, labeled claims may not be available. The central research question of this paper is whether large language models (LLMs) can be used to detect misinformation in scientific reporting. To this end, we first present a new labeled dataset, SciNews, containing 2.4k scientific news stories drawn from trustworthy and untrustworthy sources, paired with related abstracts from the CORD-19 database. The dataset includes both human-written and LLM-generated news articles, capturing the growing trend of using LLMs to generate popular-press articles. We then identify dimensions of scientific validity in science news articles and explore how these can be integrated into the automated detection of scientific misinformation. We propose several baseline architectures that use LLMs to automatically detect false representations of scientific findings in the popular press. For each architecture, we apply several prompt-engineering strategies, including zero-shot, few-shot, and chain-of-thought prompting, and we test these combinations on GPT-3.5, GPT-4, Llama2-7B, and Llama2-13B.
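For concreteness, the sketch below shows how the three prompting strategies might be instantiated for claim-agnostic verification of a news article against its source abstract. The prompt wording, label set, exemplar format, and the validity dimensions named in the chain-of-thought variant are assumptions for illustration, not the paper's published prompts or taxonomy.

```python
# Illustrative prompt builders for claim-agnostic misinformation detection.
# Wording, labels, and the listed validity dimensions are hypothetical.

TASK = (
    "You are verifying science journalism. Given a research abstract and a "
    "news article reporting on it, decide whether the article faithfully "
    "represents the findings. Answer with exactly one label: RELIABLE or "
    "UNRELIABLE."
)


def zero_shot(abstract: str, article: str) -> str:
    """Zero-shot: task description plus the input pair, no examples."""
    return f"{TASK}\n\nAbstract:\n{abstract}\n\nArticle:\n{article}\n\nLabel:"


def few_shot(abstract: str, article: str,
             exemplars: list[tuple[str, str, str]]) -> str:
    """Few-shot: prepend labeled (abstract, article, label) demonstrations."""
    demos = "\n\n".join(
        f"Abstract:\n{a}\n\nArticle:\n{n}\n\nLabel: {y}" for a, n, y in exemplars
    )
    return (f"{TASK}\n\n{demos}\n\nAbstract:\n{abstract}\n\n"
            f"Article:\n{article}\n\nLabel:")


def chain_of_thought(abstract: str, article: str) -> str:
    """Chain-of-thought: reason over validity dimensions before labeling."""
    steps = (
        "First, compare the article to the abstract along these dimensions: "
        "causal claims versus reported correlations, exaggerated effect "
        "sizes, unsupported generalization beyond the study population, and "
        "details absent from the abstract. Reason step by step, then give "
        "the label."
    )
    return (f"{TASK}\n\n{steps}\n\nAbstract:\n{abstract}\n\n"
            f"Article:\n{article}\n\nReasoning:")


if __name__ == "__main__":
    print(zero_shot("Mice given compound X slept 20% longer.",
                    "New miracle drug cures insomnia in humans."))
```

Any of the evaluated models (GPT-3.5, GPT-4, Llama2-7B, Llama2-13B) would receive such a prompt and return a label, optionally preceded by its reasoning in the chain-of-thought case; parsing that label out of the completion is what turns the LLM into an end-to-end detector.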