🤖 AI Summary
Addressing the growing demand for intelligent transformation in the Science of Science (SciSci), this work investigates the systematic application of large language models (LLMs) to scientometrics. Method: We propose an AI agent framework that integrates prompt engineering, retrieval-augmented generation (RAG), knowledge enhancement, fine-tuning, and tool learning to unify research evaluation, emerging-trend identification, and automated knowledge graph construction. Contribution/Results: Our approach enables (1) fine-grained assessment of scholarly impact drawing on heterogeneous, multi-source literature; (2) domain-agnostic, adaptive detection of research frontiers without predefined categories; and (3) dynamic, evolvable knowledge graph construction. Experimental results demonstrate substantial improvements over conventional bibliometric methods in modeling scientific structure and discovering research insights. The framework offers a scalable, interpretable, LLM-powered paradigm for SciSci, advancing both methodological rigor and practical applicability in computational science studies.
📝 Abstract
Large language models (LLMs) have exhibited exceptional capabilities in natural language understanding and generation, image recognition, and multimodal tasks, charting a course toward AGI and emerging as a central focus of the global technological race. This manuscript conducts a comprehensive review, from a user standpoint, of the core technologies that support LLMs, including prompt engineering, knowledge-enhanced retrieval-augmented generation, fine-tuning, pretraining, and tool learning. It also traces the historical development of the Science of Science (SciSci) and offers a forward-looking perspective on potential applications of LLMs in the scientometric domain. Furthermore, it discusses the prospect of an AI-agent-based model for scientific evaluation and presents new LLM-based methods for research front detection and knowledge graph construction.