🤖 AI Summary
This study investigates how Western-centric large language models (LLMs), used as writing assistants, affect cultural expression among non-Western users. We conducted a cross-cultural controlled experiment with 118 participants from the United States and India, combining qualitative coding, computational stylistic analysis, and LLM-embedded writing interventions. The results provide the first empirical evidence that AI-generated writing suggestions induce cross-cultural textual homogenization: while U.S. participants saw significant gains in writing efficiency, Indian participants unconsciously converged toward Western syntactic structures, logical framing, and rhetorical conventions, reducing culturally specific expression by an average of 37%. We identify and formalize a "model-driven implicit cultural standardization" effect and introduce an analytical framework that conceptualizes technological bias as an active force eroding linguistic and cultural diversity. These findings provide empirical grounding for AI ethics research, cross-cultural human–AI interaction studies, and the design of decentralized, culturally responsive language models.
📝 Abstract
Large language models (LLMs) are increasingly being integrated into everyday products and services, such as coding tools and writing assistants. As these embedded AI applications are deployed globally, there is growing concern that the AI models underlying them prioritize Western values. This paper investigates what happens when a Western-centric AI model provides writing suggestions to users from a different cultural background. We conducted a cross-cultural controlled experiment with 118 participants from India and the United States, who completed culturally grounded writing tasks with and without AI suggestions. Our analysis reveals that AI provided greater efficiency gains for American participants than for Indian participants. Moreover, AI suggestions led Indian participants to adopt Western writing styles, altering not only what is written but also how it is written. These findings show that Western-centric AI models homogenize writing toward Western norms, diminishing the nuances that differentiate cultural expression.