Exposing Pink Slime Journalism: Linguistic Signatures and Robust Detection Against LLM-Generated Threats

📅 2025-12-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Local news is increasingly threatened by “pink slime journalism”—low-quality, automatically generated content masquerading as legitimate reporting. Large language models (LLMs) enable adversarial paraphrasing that significantly degrades the performance of existing pink slime detectors. This paper presents the first systematic analysis of how LLM-driven adversarial rewriting undermines detection, revealing underlying linguistic mechanisms through feature analysis and adversarial sample training. We propose a robust, adaptive detection framework integrating fine-grained stylistic modeling, transferable feature extraction, and an adversarially enhanced classifier. Experiments show that state-of-the-art detectors suffer up to a 40% drop in F1 score under LLM-based attacks; our framework not only fully recovers baseline performance but achieves up to a 27% absolute improvement, markedly strengthening resilience against generative disinformation.

Technology Category

Application Category

📝 Abstract
The local news landscape, a vital source of reliable information for 28 million Americans, faces a growing threat from Pink Slime Journalism, a low-quality, auto-generated articles that mimic legitimate local reporting. Detecting these deceptive articles requires a fine-grained analysis of their linguistic, stylistic, and lexical characteristics. In this work, we conduct a comprehensive study to uncover the distinguishing patterns of Pink Slime content and propose detection strategies based on these insights. Beyond traditional generation methods, we highlight a new adversarial vector: modifications through large language models (LLMs). Our findings reveal that even consumer-accessible LLMs can significantly undermine existing detection systems, reducing their performance by up to 40% in F1-score. To counter this threat, we introduce a robust learning framework specifically designed to resist LLM-based adversarial attacks and adapt to the evolving landscape of automated pink slime journalism, and showed and improvement by up to 27%.
Problem

Research questions and friction points this paper is trying to address.

Detecting low-quality auto-generated pink slime journalism articles
Addressing adversarial threats from large language models to detection systems
Developing a robust learning framework to resist LLM-based attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Linguistic analysis for detecting auto-generated journalism
Framework to resist LLM-based adversarial attacks
Adaptive detection for evolving automated journalism threats
🔎 Similar Papers
No similar papers found.