The SMeL Test: A simple benchmark for media literacy in language models

📅 2025-08-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a critical deficiency of large language models (LLMs): identifying and filtering untrustworthy information during autonomous web browsing. To systematically evaluate LLMs' media literacy, i.e. their ability to actively discern and suppress false or low-quality content presented in context, the authors introduce the SMeL Test, a synthetic benchmark designed for this purpose. They evaluate a diverse set of instruction-tuned and reasoning-capable LLMs, including state-of-the-art API-based models. Key findings: (1) reasoning is associated with higher media-literacy scores but does not eliminate hallucination; (2) larger and more capable models do not necessarily outperform their smaller counterparts; and (3) no model consistently trusts more reliable sources, and even the best-performing API model hallucinates up to 70% of the time. These results expose a fundamental limitation of current LLMs in open-domain credibility assessment and motivate new methods for media-literacy-oriented evaluation and alignment.

📝 Abstract
The internet is rife with unattributed, deliberately misleading, or otherwise untrustworthy content. Though large language models (LLMs) are often tasked with autonomous web browsing, the extent to which they have learned the simple heuristics human researchers use to navigate this noisy environment is not currently known. In this paper, we introduce the Synthetic Media Literacy Test (SMeL Test), a minimal benchmark that tests the ability of language models to actively filter out untrustworthy information in context. We benchmark a variety of commonly used instruction-tuned LLMs, including reasoning models, and find that no model consistently trusts more reliable sources; while reasoning in particular is associated with higher scores, even the best API model we test hallucinates up to 70% of the time. Remarkably, larger and more capable models do not necessarily outperform their smaller counterparts. We hope our work sheds more light on this important form of hallucination and guides the development of new methods to combat it.
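The page does not reproduce the paper's actual item format or prompts, but the abstract's description (conflicting claims from sources of differing reliability, presented in context) suggests a simple evaluation loop. The Python sketch below is a hypothetical illustration only; the `SMeLItem` fields, the prompt wording, and the substring-matching scorer are assumptions, not the authors' protocol.

```python
from dataclasses import dataclass

@dataclass
class SMeLItem:
    """One synthetic media-literacy probe (hypothetical format)."""
    context: str           # mixes claims from sources of differing reliability
    question: str
    trusted_answer: str    # supported by the attributed, reliable source
    untrusted_answer: str  # appears only in the unattributed/unreliable source

def build_prompt(item: SMeLItem) -> str:
    # Ask the model to answer from the context, trusting only reliable sources.
    return (
        "Using only information from sources you judge trustworthy, "
        "answer the question below.\n\n"
        f"{item.context}\n\nQuestion: {item.question}"
    )

def score(item: SMeLItem, model_answer: str) -> str:
    """Label an answer: did the model trust the right source, the wrong
    one, or produce something supported by neither (a hallucination)?"""
    ans = model_answer.lower()
    if item.trusted_answer.lower() in ans:
        return "trusted"
    if item.untrusted_answer.lower() in ans:
        return "untrusted"
    return "hallucinated"
```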
Problem

Research questions and friction points this paper is trying to address.

Assessing LLMs' ability to filter untrustworthy online content
Evaluating models' media literacy via the SMeL Test benchmark
Measuring how often models hallucinate when handling unreliable sources
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces the Synthetic Media Literacy Test (SMeL Test) benchmark
Tests whether LLMs filter out untrustworthy information presented in context
Evaluates the performance of a range of instruction-tuned and reasoning LLMs (see the scoring sketch below)
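Continuing the hypothetical sketch above, an aggregate figure like the abstract's "hallucinates up to 70% of the time" could be computed as a simple fraction over probe outcomes; `query_model` here is an assumed stand-in for any chat-completion call, not part of the paper.

```python
from collections import Counter

def evaluate(items, query_model):
    """Aggregate outcome labels over a set of SMeLItem probes.

    `query_model` maps a prompt string to the model's answer string.
    """
    counts = Counter(
        score(item, query_model(build_prompt(item))) for item in items
    )
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

# A model that always parrots the unreliable source would score
# {"untrusted": 1.0}; a result like {"hallucinated": 0.7, ...} would
# correspond to the abstract's 70% hallucination figure.
```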
Gustaf Ahdritz
Kempner Institute, Harvard University
Anat Kleiman
Harvard University