🤖 AI Summary
This study addresses the limitations of traditional journal impact metrics, such as the Journal Impact Factor, in cross-disciplinary evaluation, where issues of insufficient coverage, methodological fragility, and disciplinary bias persist. Drawing on a large-scale dataset of 17,816 journals from the Scilit database, the authors evaluate the Integrated Impact Indicator (I3) and its normalized variant, I3/N, as implemented in Scilit. Through descriptive statistics and multidimensional comparative analyses, the research demonstrates that I3/N offers superior breadth of coverage, methodological robustness, and disciplinary fairness. Empirical results for the 2023–2024 evaluation period indicate that I3/N outperforms both the Journal Impact Factor and CiteScore, offering a more accurate, diagnostic, and responsible paradigm for scholarly assessment.
📝 Abstract
In this study, we systematically elucidate the background and functionality of the Scilit database and evaluate the feasibility and advantages of the comprehensive impact metrics I3 and I3/N introduced within the Scilit framework. Using a matched dataset of 17,816 journals, we conduct a comparative analysis of Scilit I3/N, the Journal Impact Factor, and CiteScore for 2023 and 2024, covering descriptive statistics and distributional characteristics from both disciplinary and publisher perspectives. The comparison reveals that the Scilit I3 and I3/N framework significantly outperforms traditional mean-based metrics in coverage, methodological robustness, and disciplinary fairness, providing a more accurate, diagnostic, and responsible solution for interdisciplinary journal impact assessment. Our research serves as a "getting started guide" for Scilit, offering scholars, librarians, and academic publishers working in bibliometrics and scientometrics a valuable perspective for exploring I3 and I3/N within an inclusive database, and thereby a more accurate and comprehensive understanding of disciplinary development and scientific progress. We advocate piloting and validating this method in broader evaluation contexts to foster a more precise and diverse representation of scientific progress.
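For readers unfamiliar with this indicator family, the minimal sketch below illustrates the percentile-based logic behind I3 and I3/N. It follows the general I3 definition of Leydesdorff and Bornmann (summing the percentile-class values of a journal's papers within a reference set, with I3/N dividing that sum by the number of papers N); the toy reference set, class scheme, and function names here are illustrative assumptions, not Scilit's actual implementation.

```python
import numpy as np

def i3(journal_citations, reference_citations, num_classes=100):
    """Illustrative Integrated Impact Indicator (I3) for one journal.

    journal_citations   -- citation counts of the journal's papers
    reference_citations -- citation counts of the reference set
                           (e.g. all papers in the discipline) used
                           to derive percentile ranks
    num_classes         -- number of percentile classes (100 = percentiles)
    """
    cites = np.asarray(journal_citations, dtype=float)
    ref = np.sort(np.asarray(reference_citations, dtype=float))

    # Percentile rank of each paper: share of reference papers with
    # strictly fewer citations.
    ranks = np.searchsorted(ref, cites, side="left") / len(ref)

    # Assign each paper to a percentile class (1..num_classes) and sum
    # the class values: I3 = sum over classes of class value * frequency.
    classes = np.clip(np.ceil(ranks * num_classes), 1, num_classes)
    return float(classes.sum())

def i3_per_n(journal_citations, reference_citations, num_classes=100):
    """Size-normalized variant: I3 divided by the number of papers (N)."""
    return i3(journal_citations, reference_citations, num_classes) / len(
        journal_citations
    )

# Hypothetical example: a four-paper journal against a toy reference set.
journal = [0, 3, 12, 45]
field = list(range(200))  # reference citation counts 0..199
print(i3(journal, field), i3_per_n(journal, field))
```

Because I3 sums (rather than averages) percentile-class values, it rewards both the volume and the citation standing of a journal's output, while I3/N removes the size advantage; this is the intuition behind the robustness and fairness claims compared above.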