How LLMs Distort Our Written Language

📅 2026-03-18
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This study investigates how large language models (LLMs) employed as writing assistants may systematically distort the semantic content, stance, and creativity of human expression, threatening the authenticity of academic and cultural discourse. Drawing on real user interaction data and experiments on a pre-LLM corpus of human-written essays, the research combines user studies, textual revision analysis, semantic evaluation, and AI-generated content detection. Findings reveal that even minimal grammatical edits by LLMs significantly alter the original meaning, drive texts toward semantic neutrality, and reduce creative expression. In peer review, LLM-assisted evaluations are consistently more lenient and overlook critical dimensions such as clarity and scholarly significance. These results provide quantitative evidence of a semantic drift effect induced by LLM-mediated writing assistance.
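
The summary's central quantity is the semantic drift between a human draft and its LLM-edited counterpart. The paper's own metric is not reproduced on this page; below is a minimal sketch of one standard way to measure such drift, assuming the `sentence-transformers` library and the `all-MiniLM-L6-v2` embedding model (both illustrative choices, not the authors' setup).

```python
# Minimal sketch: quantify semantic drift between an original text and its
# LLM-edited version via embedding cosine similarity. The embedding model and
# this drift definition are illustrative assumptions, not the paper's method.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def semantic_drift(original: str, edited: str) -> float:
    """Return 1 - cosine similarity: 0 means meaning preserved, higher means altered."""
    emb = model.encode([original, edited])  # shape (2, dim)
    a, b = emb[0], emb[1]
    cos = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return 1.0 - cos

# Hypothetical example pair: a grammar-level edit that shifts the stance.
original = "The policy clearly failed to reduce emissions."
edited = "The policy had mixed effects on emissions."
print(f"drift = {semantic_drift(original, edited):.3f}")
```

A drift near 0 indicates the edit preserved meaning; values well above 0 flag the kind of meaning shift the study reports even for grammar-only edits.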

📝 Abstract
Large language models (LLMs) are used by over a billion people globally, most often to assist with writing. In this work, we demonstrate that LLMs not only alter the voice and tone of human writing, but also consistently alter the intended meaning. First, we conduct a human user study to understand how people actually interact with LLMs when using them for writing. Our findings reveal that extensive LLM use led to a nearly 70% increase in essays that remained neutral on the topic question, and heavy LLM users were significantly more likely to report that the writing was less creative and not in their voice. Next, using a dataset of human-written essays collected in 2021, before the widespread release of LLMs, we study how asking an LLM to revise an essay based on the dataset's human-written feedback induces large changes in the resulting content and meaning. We find that even when LLMs are prompted with expert feedback and asked to make only grammar edits, they still change the text in ways that significantly alter its semantic meaning. We then examine LLM-generated text in the wild, focusing on the 21% of scientific peer reviews at a recent top AI conference that were AI-generated. We find that LLM-generated reviews place significantly less weight on the clarity and significance of the research, and assign scores that are, on average, a full point higher. These findings highlight a misalignment between the perceived benefit of AI use and an implicit, consistent effect on the semantics of human writing, motivating future work on how widespread AI writing will affect our cultural and scientific institutions.
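
The grammar-only revision condition described above can be approximated as follows. This is a hedged sketch rather than the authors' pipeline: the model name, the prompt wording, and the use of the OpenAI chat API are illustrative assumptions.

```python
# Sketch of a "grammar edits only" condition: ask an LLM to fix grammar and
# nothing else, then measure how much text actually changed. Model choice and
# prompt wording are assumptions; the paper's exact setup is not shown here.
import difflib
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

GRAMMAR_ONLY_PROMPT = (
    "Fix grammar and spelling errors in the following essay. "
    "Do not change its meaning, stance, tone, or word choice otherwise. "
    "Return only the revised essay."
)

def grammar_only_revision(essay: str, model: str = "gpt-4o-mini") -> str:
    """Request a grammar-only revision of the essay from a chat model."""
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": GRAMMAR_ONLY_PROMPT},
            {"role": "user", "content": essay},
        ],
    )
    return resp.choices[0].message.content

def changed_ratio(original: str, revised: str) -> float:
    """Fraction of characters changed: a crude proxy for edit size."""
    return 1.0 - difflib.SequenceMatcher(None, original, revised).ratio()
```

Pairing `changed_ratio` (surface edits) with an embedding-based drift score like the one sketched under the AI Summary separates cosmetic fixes from the meaning changes the abstract reports.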
Problem

Research questions and friction points this paper is trying to address.

large language models
semantic distortion
AI-assisted writing
human writing
meaning alteration
Innovation

Methods, ideas, or system contributions that make the work stand out.

semantic distortion
large language models
writing assistance
human-AI interaction
peer review