🤖 AI Summary
This study systematically investigates how large language models (LLMs) have reshaped the computer science academic ecosystem from 2019 to 2024. It addresses four core questions: shifts in research themes at top-tier conferences, the evolution of emerging subfields, disparities between academic and industrial contributions, and national-level developmental trajectories. To do so, the work introduces the first multidimensional analytical framework spanning 77 premier conferences and 16,193 papers, integrating bibliometrics, LDA topic modeling, metadata mining, and time-series analysis. The study quantifies dynamic LLM-topic penetration rates across conferences, uncovers a structural bifurcation between foundational theory and applied deployment, and distills ten original, empirically grounded insights into the AI research ecosystem. These findings provide rigorous evidence to inform AI research policy and disciplinary evolution.
📝 Abstract
Large Language Models (LLMs) are reshaping the landscape of computer science research, driving significant shifts in research priorities across diverse conferences and fields. This study provides a comprehensive analysis of publication trends for LLM-related papers at 77 top-tier computer science conferences over the past six years (2019-2024). We approach this analysis from four distinct perspectives: (1) We investigate how LLM research is driving topic shifts within major conferences. (2) We adopt a topic modeling approach to identify areas of LLM-related topic growth and reveal the topics of concern at different conferences. (3) We examine the distinct contribution patterns of academic and industrial institutions. (4) We study the influence of national origins on LLM development trajectories. Synthesizing the findings from these analytical angles, we derive ten key insights that illuminate the dynamics and evolution of the LLM research ecosystem.