🤖 AI Summary
The widespread adoption of generative AI—particularly large language models (LLMs)—in news production raises critical concerns regarding factual accuracy, authorial attribution, and stylistic diversity. This study analyzes over 40,000 news articles from mainstream, local, and university-affiliated media outlets, employing a multi-detector framework that integrates Binoculars, Fast-DetectGPT, and GPTZero, complemented by sentence-level classification and linguistic feature analysis. It is the first to systematically identify a structural pattern in LLM usage: high-frequency generation in lead paragraphs paired with predominantly human-written conclusions. Results indicate that AI assistance significantly enhances readability and lexical diversity but diminishes textual formality and regional stylistic distinctiveness—most markedly among local and university media, which exhibit pronounced homogenization. The study provides empirical grounding and methodological innovation for news ethics governance and the development of responsible AI–human collaborative writing standards.
📝 Abstract
The rapid rise of Generative AI (GenAI), particularly LLMs, raises concerns about journalistic integrity and authorship. This study examines AI-generated content in over 40,000 news articles from major, local, and college news outlets across various media formats. Using three advanced AI-text detectors (Binoculars, Fast-DetectGPT, and GPTZero), we find a substantial increase in GenAI use in recent years, especially in local and college news. Sentence-level analysis reveals that LLMs are often used for news introductions, while conclusions are usually written manually. Linguistic analysis shows GenAI boosts lexical richness and readability but lowers formality, leading to more uniform writing styles, particularly in local media.
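One common way to combine multiple AI-text detectors, as the abstract describes, is threshold-based majority voting. The sketch below is purely illustrative: the scores, thresholds, and flagging rule are assumptions for demonstration, not the study's actual calibration or the detectors' real APIs.

```python
# Hypothetical sketch of multi-detector aggregation: each detector is assumed
# to return a score in [0, 1] (higher = more likely AI-generated), and a text
# is flagged when a majority of detectors exceed their own thresholds.
# All names, scores, and thresholds below are illustrative, not from the study.

def majority_vote(scores: dict, thresholds: dict) -> bool:
    """Flag text as AI-generated when most detectors exceed their thresholds."""
    votes = sum(scores[name] >= thresholds[name] for name in scores)
    return votes > len(scores) / 2

# Made-up scores for the three detectors named in the abstract.
scores = {"binoculars": 0.82, "fast_detect_gpt": 0.67, "gptzero": 0.41}
thresholds = {"binoculars": 0.70, "fast_detect_gpt": 0.60, "gptzero": 0.50}
print(majority_vote(scores, thresholds))  # two of three detectors vote "AI"
```

In practice each detector would need its own calibrated threshold, since their score distributions differ; the per-detector `thresholds` dict reflects that design choice.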