🤖 AI Summary
This study addresses the inefficiencies of traditional peer review and the lack of cross-national empirical evidence on the impact of AI-enhanced review systems on scientific productivity. Leveraging panel data from OECD countries, it constructs the first internationally comparable AI Review Capability (AIRC) index and employs fixed-effects and structural equation models to systematically assess how AI-augmented peer review influences research productivity, reproducibility, and innovative output. The findings reveal that a one-standard-deviation increase in the AIRC index is associated with a statistically significant 18–25% rise in research productivity and a notable reduction in the variability of research quality. This work provides the first cross-national empirical validation of AI as a structural driver in knowledge production.
📝 Abstract
This study empirically investigates the impact of AI-augmented peer review systems on scientific productivity using panel data from OECD countries. While prior research has highlighted inefficiencies in traditional peer review, little empirical work has quantified the systemic impact of AI integration at the national level. We construct a novel AI Review Capability (AIRC) index and examine its effects on research productivity, reproducibility, and innovation output. Using fixed-effects regression and structural equation modeling (SEM), we show that AI-assisted evaluation significantly enhances productivity and reduces variance in research quality. Results indicate that a one-standard-deviation increase in AIRC is associated with an 18–25% increase in scientific productivity, mediated through improvements in review efficiency and reproducibility. This paper provides the first cross-country empirical validation of AI-augmented scientific evaluation systems and contributes to the emerging literature on AI as a structural driver of knowledge production.
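The headline estimate comes from a country fixed-effects regression of (log) scientific productivity on the standardized AIRC index. A minimal sketch of that estimator is below, using synthetic data; the country count, coefficient, and variable names are illustrative assumptions, not values from the study, and fixed effects are absorbed via the within (demeaning) transformation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical panel: 30 countries observed over 15 years (synthetic data)
n_countries, n_years = 30, 15
country = np.repeat(np.arange(n_countries), n_years)
airc = rng.normal(size=n_countries * n_years)                    # raw AIRC index
log_prod = 0.2 * airc + rng.normal(scale=0.5, size=airc.size)    # log productivity

# Standardize AIRC so the slope reads as the effect of a one-SD increase
airc_z = (airc - airc.mean()) / airc.std()

def demean_by(group, x):
    """Within transformation: subtract each country's mean (absorbs fixed effects)."""
    out = x.astype(float).copy()
    for g in np.unique(group):
        mask = group == g
        out[mask] -= x[mask].mean()
    return out

y_w = demean_by(country, log_prod)
x_w = demean_by(country, airc_z)

# OLS slope on the demeaned data = the fixed-effects estimate of beta
beta = (x_w @ y_w) / (x_w @ x_w)
print(round(beta, 3))
```

Because the outcome is in logs, a coefficient of roughly 0.18–0.25 on the standardized index corresponds to the reported 18–25% productivity increase per one-SD rise in AIRC.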