Research Borderlands: Analysing Writing Across Research Cultures

📅 2025-06-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the adaptability of large language models (LLMs) to interdisciplinary academic writing cultures, focusing on linguistic-cultural disparities across structural organization, stylistic conventions, rhetorical strategies, and citation norms. Methodologically, it adopts a human-centered paradigm: semi-structured interviews with domain experts elicit culturally grounded norms, which are formalized into computable linguistic metrics; these metrics are then applied to comparative analyses of tens of thousands of scholarly publications and LLM-generated texts. Results reveal, for the first time, an implicit cultural homogenization bias in LLMs: their outputs exhibit significantly lower cross-cultural adaptability than human-authored texts and manifest stylistic flattening—reduced variation in voice, register, and disciplinary nuance. The work establishes a theoretical framework and empirical foundation for developing culture-aware academic AI systems, advancing responsible deployment of LLMs in scholarly communication.

📝 Abstract
Improving cultural competence of language technologies is important. However, most recent works rarely engage with the communities they study, and instead rely on synthetic setups and imperfect proxies of culture. In this work, we take a human-centered approach to discover and measure language-based cultural norms, and cultural competence of LLMs. We focus on a single kind of culture, research cultures, and a single task, adapting writing across research cultures. Through a set of interviews with interdisciplinary researchers, who are experts at moving between cultures, we create a framework of structural, stylistic, rhetorical, and citational norms that vary across research cultures. We operationalise these features with a suite of computational metrics and use them for (a) surfacing latent cultural norms in human-written research papers at scale; and (b) highlighting the lack of cultural competence of LLMs, and their tendency to homogenise writing. Overall, our work illustrates the efficacy of a human-centered approach to measuring cultural norms in human-written and LLM-generated texts.
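To give a concrete sense of what "operationalising" a stylistic norm as a computational metric might look like, here is a minimal toy sketch. These are hypothetical illustrations, not the paper's actual metric suite: two simple measures (mean sentence length and first-person pronoun rate) that plausibly vary across research cultures.

```python
import re

def mean_sentence_length(text: str) -> float:
    """Average number of word tokens per sentence (toy metric)."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    if not sentences:
        return 0.0
    return sum(len(s.split()) for s in sentences) / len(sentences)

def first_person_rate(text: str) -> float:
    """Fraction of word tokens that are first-person pronouns (toy metric)."""
    tokens = re.findall(r"[A-Za-z']+", text.lower())
    if not tokens:
        return 0.0
    first_person = {"i", "we", "our", "us", "my"}
    return sum(t in first_person for t in tokens) / len(tokens)

sample = "We propose a new method. It improves our results."
print(round(mean_sentence_length(sample), 2))  # 4.5
print(round(first_person_rate(sample), 2))     # 0.22
```

Metrics of this shape can be run at scale over corpora of human-written and LLM-generated papers and then compared across fields, which is the kind of analysis the abstract describes.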
Problem

Research questions and friction points this paper is trying to address.

Improving cultural competence in language technologies
Measuring language-based cultural norms in research writing
Assessing LLMs' tendency to homogenize cultural writing styles
Innovation

Methods, ideas, or system contributions that make the work stand out.

Human-centered approach to cultural norms
Computational metrics for cultural features
Demonstrating LLMs' lack of cultural competence and tendency to homogenize writing