🤖 AI Summary
Predicting the long-term potential of a scholarly paper to become highly cited based solely on its early textual content remains a key challenge in research impact assessment. This work proposes a text-centric framework that leverages only the title, abstract, keywords, and limited metadata, combined with large language models (LLMs) and structured prompt engineering, to provide the first systematic evaluation of LLMs for highly cited paper prediction. The approach consistently outperforms existing benchmarks across multiple publication years and definitions of high citation, demonstrating strong cross-temporal generalization. It also surfaces recurring high-impact topics such as causal inference and deep learning. To facilitate real-world application, the authors have developed a WeChat mini-program, "Stat Highly Cited Papers," enabling practical deployment of the proposed methodology.
📝 Abstract
Predicting highly cited papers is a long-standing challenge due to the complex interactions of research content, scholarly communities, and temporal dynamics. Recent advances in large language models (LLMs) raise the question of whether early-stage textual information can provide useful signals of long-term scientific impact. Focusing on statistical publications, we propose a flexible, text-centered framework that leverages LLMs and structured prompt design to predict highly cited papers. Specifically, we utilize information available at the time of publication, including titles, abstracts, keywords, and limited bibliographic metadata. Using a large corpus of statistical papers, we evaluate predictive performance across multiple publication periods and alternative definitions of highly cited papers. The proposed approach achieves stable and competitive performance relative to existing methods and demonstrates strong generalization over time. Textual analysis further reveals that papers predicted as highly cited concentrate on recurring topics such as causal inference and deep learning. To facilitate practical use of the proposed approach, we further develop a WeChat mini-program, "Stat Highly Cited Papers," which provides an accessible interface for early-stage citation impact assessment. Overall, our results provide empirical evidence that LLMs can capture meaningful early signals of long-term citation impact, while also highlighting their limitations as tools for research impact assessment.
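The core step described above (assembling publication-time text into a structured prompt and mapping an LLM's answer to a binary highly-cited label) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the prompt wording, the top-10% threshold phrasing, and the `call_llm` stub (which stands in for a real LLM API call) are all assumptions.

```python
# Hypothetical sketch of text-centric, prompt-based highly-cited-paper
# prediction. Field names, prompt wording, and the stubbed LLM call are
# illustrative assumptions, not the paper's actual pipeline.

def build_prompt(title, abstract, keywords, journal, year):
    """Assemble a structured prompt from information available at publication time."""
    return (
        "You are assessing the long-term citation potential of a statistics paper.\n"
        f"Title: {title}\n"
        f"Abstract: {abstract}\n"
        f"Keywords: {', '.join(keywords)}\n"
        f"Journal: {journal}  Year: {year}\n"
        "Will this paper become highly cited (e.g., top cited in its field)? "
        "Answer with exactly one word: YES or NO."
    )

def parse_prediction(response):
    """Map the model's free-text answer to a boolean highly-cited label."""
    return response.strip().upper().startswith("YES")

def call_llm(prompt):
    # Placeholder for a real LLM API call; the response is hard-coded
    # so this sketch runs offline.
    return "YES"

if __name__ == "__main__":
    prompt = build_prompt(
        title="Double machine learning for treatment effects",
        abstract="We study debiased estimation of causal parameters...",
        keywords=["causal inference", "machine learning"],
        journal="(hypothetical journal)",
        year=2018,
    )
    print(parse_prediction(call_llm(prompt)))  # True
```

In practice the stub would be replaced by a call to an actual LLM endpoint, and the yes/no phrasing would be aligned with whichever definition of "highly cited" (citation-count threshold or percentile) is being evaluated.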