🤖 AI Summary
This study addresses the emergent phenomenon of AI hallucinations: high-fidelity, unintentional false outputs produced by generative AI that challenge traditional misinformation theories centered on human agency. Methodologically, it develops the first communication-theoretic framework for AI hallucinations, defining them as “unintentional, high-credibility information distortions,” and integrates a supply-and-demand model with distributed agency theory to analyze how their generative mechanisms, perceptual logics, and institutional responses differ from those of human-generated misinformation. Drawing on science communication, social constructionism, and media ecology, the study establishes a cross-level (macro–meso–micro) analytical pathway that positions AI hallucinations as an autonomous communicative phenomenon. Its contributions are threefold: (1) establishing a foundational conceptual framework; (2) delineating three core theoretical differences from conventional misinformation; and (3) proposing a three-tier research agenda to reconfigure the theoretical boundaries of misinformation and inform knowledge governance in the AI era.
📝 Abstract
This paper proposes a conceptual framework for understanding AI hallucinations as a distinct form of misinformation. While misinformation scholarship has traditionally focused on human intent, generative AI systems now produce false yet plausible outputs in the absence of such intent. I argue that these AI hallucinations should not be treated merely as technical failures but as communication phenomena with social consequences. Drawing on a supply-and-demand model and the concept of distributed agency, the framework outlines how hallucinations differ from human-generated misinformation in production, perception, and institutional response. I conclude by outlining a research agenda for communication scholars to investigate the emergence, dissemination, and audience reception of hallucinated content, with attention to the macro (institutional), meso (group), and micro (individual) levels. This work urges communication researchers to rethink the boundaries of misinformation theory in light of the probabilistic, non-human actors increasingly embedded in knowledge production.