🤖 AI Summary
This paper reveals that large language models (LLMs) used as writing assistants induce systemic linguistic homogenization: they preserve semantic content while significantly reducing individual stylistic diversity, selectively amplifying dominant stylistic features and societal biases, and suppressing marginalized linguistic expressions. Employing a multimethod empirical approach, including controlled experiments, natural-text observation, quantitative stylistic analysis, bias-classifier evaluation, and robustness testing across models, prompts, and scenarios, the study is the first to demonstrate the strong generalizability of this phenomenon. Its key contributions are: (1) establishing that LLM-driven erosion of linguistic diversity poses serious risks to fairness (e.g., misjudging cultural fit in hiring), clinical diagnostics (loss of individuating language cues), and cultural preservation; and (2) providing the first reproducible, multidimensionally validated framework for assessing the sociolinguistic impact of AI-mediated language intervention.
📝 Abstract
Language is far more than a communication tool. A wealth of information, including but not limited to the identities, psychological states, and social contexts of its users, can be gleaned through linguistic markers, and such insights are routinely leveraged across diverse fields ranging from product development and marketing to healthcare. In four studies using experimental and observational methods, we demonstrate that the widespread adoption of large language models (LLMs) as writing assistants is linked to notable declines in linguistic diversity and may interfere with the societal and psychological insights language provides. We show that while the core content of texts is retained when LLMs polish and rewrite them, LLMs not only homogenize writing styles but also alter stylistic elements in ways that selectively amplify certain dominant characteristics or biases while suppressing others, emphasizing conformity over individuality. By varying LLMs, prompts, classifiers, and contexts, we show that these trends are robust and consistent. Our findings highlight a wide array of risks associated with linguistic homogenization: compromised diagnostic processes and personalization efforts; the exacerbation of existing divides and barriers to equity in settings like personnel selection, where language plays a critical role in assessing candidates' qualifications, communication skills, and cultural fit; and the undermining of cultural preservation efforts.