Which stylistic features fool ChatGPT research evaluations?

📅 2026-03-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether linguistic style factors—such as readability, length, and complexity—introduce non-quality-related biases in ChatGPT’s evaluation of research quality. Leveraging abstracts from 99,277 journal articles submitted to the UK’s Research Excellence Framework (REF) 2021, the authors combine established readability metrics, large-scale textual analysis, and automated scoring by ChatGPT to systematically compare its assessments against expert human ratings. The findings reveal, for the first time, that ChatGPT’s scores are positively influenced by abstract length and linguistic complexity, whereas human expert evaluations show no such association. This discrepancy indicates a systematic bias in large language models toward favoring longer and more complex texts, raising critical concerns about their reliability and fairness when deployed in scholarly assessment contexts.

📝 Abstract
Large Language Models (LLMs) have the potential to be used to support research evaluation and have a moderate capability to estimate the research quality of a journal article from its title and abstract. This paper assesses whether there are language-related factors unrelated to the quality of the research that influence ChatGPT's scores. Using a dataset of 99,277 journal articles submitted to the UK-wide Research Excellence Framework (REF) 2021 assessments, we calculated several readability indicators from abstracts and correlated them with ChatGPT scores and departmental REF scores. From the results, linguistic complexity and length were more strongly associated with ChatGPT research quality scores than with REF expert scores in many subject areas. Although cause-and-effect was not tested, these results suggest that ChatGPT may be more likely than human experts to reward linguistic complexity, with a potential bias towards longer and less readable abstracts in many fields. The apparent preference of LLMs for complex language is an undesirable feature for practical applications of LLMs for research quality evaluation, unless solutions can be found.
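The method described above (computing readability indicators from abstract text and correlating them with quality scores) can be sketched as follows. This is a minimal illustration, not the authors' code: the Flesch Reading Ease formula is standard, but the syllable counter is a crude approximation, and the rank correlation below ignores ties for simplicity.

```python
import re
from statistics import mean

def count_syllables(word: str) -> int:
    # Rough approximation: count contiguous vowel groups, minimum one.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    # Standard formula: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    # Higher scores mean easier (more readable) text.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

def spearman(xs: list[float], ys: list[float]) -> float:
    # Spearman's rho: Pearson correlation of the ranks.
    # Ties are not handled; adequate for an illustrative sketch only.
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0.0] * len(values)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    rx, ry = ranks(xs), ranks(ys)
    mx, my = mean(rx), mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

In the paper's setup, one would compute an indicator such as `flesch_reading_ease` for each of the 99,277 abstracts, then compare `spearman(readability_scores, chatgpt_scores)` against `spearman(readability_scores, ref_scores)` within each subject area; a stronger association for the ChatGPT column is the reported bias signal.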
Problem

Research questions and friction points this paper is trying to address.

stylistic features
research evaluation
large language models
linguistic complexity
readability
Innovation

Methods, ideas, or system contributions that make the work stand out.

large language models
research evaluation
linguistic bias
readability metrics
ChatGPT
Kayvan Kousha
Statistical Cybermetrics and Research Evaluation Group, Business School, University of Wolverhampton, UK
Mike Thelwall
School of Information, Journalism and Communication, The University of Sheffield
scientometrics, altmetrics, sentiment analysis, social media, artificial intelligence