AI as We Describe It: How Large Language Models and Their Applications in Health are Represented Across Channels of Public Discourse

📅 2025-11-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether public discourse on large language models (LLMs) in health contexts is balanced and how it shapes societal expectations in high-stakes applications. Employing diachronic content analysis and semiotic methods across five platforms—news media, academic science communication, YouTube, TikTok, and Reddit—it quantifies differences in semantic framing, information density, and anthropomorphism. Results reveal systematic cross-platform disparities: social media emphasizes well-being narratives while underrepresenting risks; overall discourse is predominantly positive yet fragmented, with insufficient explanation of LLM generative mechanisms. These findings expose dual deficits in public digital health literacy and regulatory responsiveness. The study provides empirical grounding for ethically informed AI health communication and platform-specific governance frameworks.

📝 Abstract
Representation shapes public attitudes and behaviors. With the arrival and rapid adoption of LLMs, the way these systems are introduced will negotiate societal expectations for their role in high-stakes domains like health. Yet it remains unclear whether current narratives present a balanced view. We analyzed five prominent discourse channels (news, research press, YouTube, TikTok, and Reddit) over a two-year period on lexical style, informational content, and symbolic representation. Discussions were generally positive and episodic, with positivity increasing over time. Risk communication was superficial and often reduced to information quality incidents, while explanations of LLMs' generative nature were rare. Compared with professional outlets, TikTok and Reddit highlighted wellbeing applications and showed greater variations in tone and anthropomorphism but little attention to risks. We discuss implications for public discourse as a diagnostic tool in identifying literacy and governance gaps, and for communication and design strategies to support more informed LLM engagement.
Problem

Research questions and friction points this paper is trying to address.

Analyzing public discourse representation of LLMs in health applications
Assessing balance in risk communication and generative nature explanations
Identifying literacy and governance gaps across different media channels
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compared five public discourse channels over a two-year period
Combined diachronic content analysis of lexical style, informational content, and symbolic representation
Used discourse deficits to diagnose literacy and governance gaps
Jiawei Zhou
Georgia Institute of Technology, USA
Lei Zhang
Georgia Institute of Technology, USA
Mei Li
Georgia Institute of Technology, USA
Benjamin D. Horne
University of Tennessee Knoxville, USA
Munmun De Choudhury
Georgia Institute of Technology, USA
Computational Social Science · Social Computing · Mental Health · Language