Computational Approaches to Understanding Large Language Model Impact on Writing and Information Ecosystems

📅 2025-06-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically examines the tripartite impact of large language models (LLMs) on writing and information ecosystems: (1) systematic misclassification and fairness risks of AI detectors for non-dominant linguistic varieties; (2) adoption dynamics of LLM-generated content across diverse domains; and (3) LLMs' viability as inclusive, low-barrier tools for scholarly feedback. Methodologically, we integrate bias auditing, cross-domain temporal pattern mining, large-scale text provenance analysis, and empirical A/B evaluation. We first document significant discrimination by AI detectors against non-native academic writing. Second, we introduce the first population-scale, multi-scenario quantitative framework for measuring LLM adoption. Third, empirical results demonstrate that LLM-provided feedback effectively mitigates peer-review gaps for early-career researchers and for institutions in resource-constrained regions. Content penetration exhibits sustained growth across six domains—including academic review and corporate communication—confirming LLMs' expanding role in knowledge production and dissemination.

📝 Abstract
Large language models (LLMs) have shown significant potential to change how we write, communicate, and create, leading to rapid adoption across society. This dissertation examines how individuals and institutions are adapting to and engaging with this emerging technology through three research directions. First, I demonstrate how the institutional adoption of AI detectors introduces systematic biases, particularly disadvantaging writers of non-dominant language varieties, highlighting critical equity concerns in AI governance. Second, I present novel population-level algorithmic approaches that measure the increasing adoption of LLMs across writing domains, revealing consistent patterns of AI-assisted content in academic peer reviews, scientific publications, consumer complaints, corporate communications, job postings, and international organization press releases. Finally, I investigate LLMs' capability to provide feedback on research manuscripts through a large-scale empirical analysis, offering insights into their potential to support researchers who face barriers in accessing timely manuscript feedback, particularly early-career researchers and those from under-resourced settings.
Problem

Research questions and friction points this paper is trying to address.

Do institutionally adopted AI detectors systematically misclassify writing in non-dominant language varieties, and what equity concerns does this raise?
How widely is LLM-generated content being adopted across diverse writing domains, and how can that adoption be measured at population scale?
Can LLMs provide useful manuscript feedback to researchers who face barriers to timely review, such as those in under-resourced settings?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bias audit exposing AI detector discrimination against non-dominant language varieties
Population-scale algorithmic framework for tracking LLM adoption across writing domains
Large-scale empirical evaluation of LLM-generated feedback on research manuscripts for under-resourced researchers