Profiling News Media for Factuality and Bias Using LLMs and the Fact-Checking Methodology of Human Experts

📅 2025-06-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Assessing the overall credibility of news media amid rampant misinformation remains an underexplored yet critical challenge. Method: We propose the first LLM prompting framework structured around professional fact-checking guidelines, enabling a shift from sentence-level fact verification to systematic, media-level profiling of reliability and political bias. Our approach integrates multi-model collaborative reasoning, expert-guideline-driven response aggregation, and controllable evaluation mechanisms tailored to media-specific attributes. Contributions/Results: Experiments demonstrate sizable improvements over strong baselines across multiple mainstream LLMs. We further uncover, for the first time, empirical patterns linking media popularity and geographic distribution to evaluation performance. To foster reproducibility, we publicly release a high-quality media profiling dataset and implementation code, establishing a new benchmark and toolkit for media credibility modeling.

📝 Abstract
In an age characterized by the proliferation of mis- and disinformation online, it is critical to empower readers to understand the content they are reading. Important efforts in this direction rely on manual or automatic fact-checking, which can be challenging for emerging claims with limited information. Such scenarios can be handled by assessing the reliability and the political bias of the source of the claim, i.e., characterizing entire news outlets rather than individual claims or articles. This is an important but understudied research direction. While prior work has looked into linguistic and social contexts, our approach does not analyze individual articles or social media information. Instead, we propose a novel methodology that emulates the criteria that professional fact-checkers use to assess the factuality and political bias of an entire outlet. Specifically, we design a variety of prompts based on these criteria and elicit responses from large language models (LLMs), which we aggregate to make predictions. In addition to demonstrating sizable improvements over strong baselines via extensive experiments with multiple LLMs, we provide an in-depth error analysis of the effect of media popularity and region on model performance. Further, we conduct an ablation study to highlight the key components of our dataset that contribute to these improvements. To facilitate future research, we release our dataset and code at https://github.com/mbzuai-nlp/llm-media-profiling.
Problem

Research questions and friction points this paper is trying to address.

Assessing news media factuality and bias using LLMs
Emulating expert fact-checking methods for entire outlets
Improving reliability predictions by aggregating LLM responses
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using LLMs to emulate expert fact-checking criteria
Aggregating LLM responses for media reliability predictions
Designing prompts based on professional fact-checking standards
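The pipeline described above (criterion-based prompts, LLM elicitation, response aggregation) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the criterion names, prompt wording, and the majority-vote aggregation are placeholders standing in for the authors' guideline-driven prompt set and aggregation scheme (see their released code for the actual details), and `mock_llm` stands in for a real LLM API call.

```python
from collections import Counter

# Hypothetical fact-checking criteria; the paper derives its actual
# prompt set from professional fact-checking guidelines.
CRITERIA = ["sourcing quality", "headline accuracy", "corrections policy"]

def profile_outlet(ask_llm, outlet):
    """Elicit one reliability judgment per criterion, then aggregate
    by majority vote (a simple stand-in for guideline-driven aggregation)."""
    answers = [
        ask_llm(f"Assess the {c} of {outlet}. Answer 'high', 'mixed', or 'low'.")
        for c in CRITERIA
    ]
    label, _ = Counter(answers).most_common(1)[0]
    return label, answers

# Mock LLM for illustration only; a real system would query an LLM here.
def mock_llm(prompt):
    return "low" if "headline" in prompt else "high"

label, answers = profile_outlet(mock_llm, "example-news.com")
print(label)  # majority label across the per-criterion judgments
```

The design point is that each prompt targets one expert criterion, so the per-criterion answers remain interpretable before they are combined into an outlet-level label.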