🤖 AI Summary
This study investigates whether large language models (LLMs) such as ChatGPT reshape human spoken language and cultural practices through human–AI linguistic feedback loops. Method: Leveraging ASR-transcribed speech from about 280,000 university-affiliated YouTube lecture videos, we conduct time-series word-frequency analysis within a pre–post ChatGPT-release quasi-experimental design. Contribution/Results: We present the first empirical evidence that LLMs directly influence authentic human spoken behavior: after the release, ChatGPT-characteristic lexical items show statistically significant increases in academic speech (p < 0.001), consistent with systematic oral imitation by humans. Moving beyond prior work focused on written language, this study shows how AI-generated language diffuses into spoken discourse. It further highlights critical sociocultural risks, including erosion of linguistic diversity, discursive manipulation, and asymmetric human–AI co-evolution, thereby advancing foundational understanding of LLMs' real-world linguistic impact.
📝 Abstract
Artificial Intelligence (AI) agents now interact with billions of humans in natural language, thanks to advances in Large Language Models (LLMs) like ChatGPT. This raises the question of whether AI has the potential to shape a fundamental aspect of human culture: the way we speak. Recent analyses revealed that scientific publications already exhibit evidence of AI-specific language. But this evidence is inconclusive, since scientists may simply be using AI to copy-edit their writing. To explore whether AI has influenced human spoken communication, we transcribed and analyzed about 280,000 English-language videos of presentations, talks, and speeches from more than 20,000 YouTube channels of academic institutions. We find a significant shift in the usage trend of words distinctively associated with ChatGPT following its release. These findings provide the first empirical evidence that humans increasingly imitate LLMs in their spoken language. Our results raise societal and policy-relevant concerns about the potential of AI to unintentionally reduce linguistic diversity, or to be deliberately misused for mass manipulation. They also highlight the need for further investigation into the feedback loops between machine behavior and human culture.
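The pre–post trend comparison described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual pipeline: the monthly frequencies below are invented for demonstration, and the slope comparison stands in for whatever regression specification the authors used.

```python
# Minimal sketch of an interrupted time-series check: compare the slope of a
# word's relative frequency before vs. after the ChatGPT release.
# The data here are HYPOTHETICAL monthly frequencies (per 10,000 words).

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Months 0-11 precede the release; months 12-23 follow it (illustrative values).
freqs = [1.0, 1.1, 0.9, 1.0, 1.1, 1.0, 0.9, 1.0, 1.1, 1.0, 1.0, 1.1,
         1.2, 1.4, 1.5, 1.7, 1.9, 2.0, 2.2, 2.4, 2.5, 2.7, 2.9, 3.0]
release = 12

pre_slope = ols_slope(list(range(release)), freqs[:release])
post_slope = ols_slope(list(range(release, len(freqs))), freqs[release:])

print(f"pre-release slope:  {pre_slope:+.3f} per month")
print(f"post-release slope: {post_slope:+.3f} per month")
```

A significant increase in the post-release slope relative to the pre-release slope, replicated across a set of ChatGPT-associated words and controlled against neutral words, is the kind of signal the study reports.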