🤖 AI Summary
Low participation in peer review often leads to delays and inconsistent quality in research funding decisions. This study presents the first systematic evaluation of medium-sized open-weights large language models (LLMs), such as Gemma 3 27B, for scoring postdoctoral grant applications. Using proposal titles and abstracts from the Swedish Medical Research Council's 1994 postdoctoral fellowship round, the LLMs generated scores that were compared against expert reviewers' assessments via Spearman's rank correlation coefficient. The results show a weak, positive, and mostly statistically significant correlation between LLM-generated and human-assigned scores (mean ρ = 0.22, peaking at 0.33 for Gemma 3 27B), amounting to approximately 56% of the inter-reviewer correlation. These findings suggest that LLMs could provide meaningful support in preliminary screening or tie-breaking within peer review processes.
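To make the scoring setup concrete, here is a minimal Python sketch of how a medium-sized open-weights model could be asked to score a proposal from its title and abstract alone. It assumes a locally hosted Gemma 3 27B served through Ollama's HTTP generate endpoint; the prompt wording, 1–5 scale, and serving stack are illustrative assumptions, not the study's actual protocol.

```python
import re
import requests

# Assumption: Gemma 3 27B served locally via Ollama's /api/generate endpoint
# (the study does not specify its serving stack; this is illustrative only).
OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "gemma3:27b"

def score_proposal(title: str, abstract: str) -> float | None:
    """Ask the model for a 1-5 quality score from title + abstract only."""
    prompt = (
        "You are reviewing a postdoctoral grant application.\n"
        f"Title: {title}\n"
        f"Abstract: {abstract}\n"
        "Rate its quality on a scale from 1 (poor) to 5 (excellent). "
        "Reply with the number only."
    )
    resp = requests.post(
        OLLAMA_URL,
        json={"model": MODEL, "prompt": prompt, "stream": False},
        timeout=300,
    )
    resp.raise_for_status()
    text = resp.json()["response"]
    # Extract the first number in the reply; return None if the model
    # answered with no parseable score.
    match = re.search(r"\d+(?:\.\d+)?", text)
    return float(match.group()) if match else None
```

Repeating such a call over all applications yields one score per proposal per model, which can then be rank-correlated with the expert panel's averages.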
📝 Abstract
Purpose: Despite the importance of peer review for grant funding decisions, academics are often reluctant to conduct it. This can lead to long delays between submission and the final decision, as well as the risk of substandard reviews from busy or non-specialist scholars. At least one funder now uses Large Language Models (LLMs) to reduce the reviewing burden, but the accuracy of LLMs for scoring grant proposals needs to be assessed.

Design/methodology/approach: This article compares scores from a range of medium-sized open-weights LLMs with peer review scores for a well-researched dataset: the Swedish Medical Research Council's postdoctoral fellowship applications from 1994.

Findings: Whilst the LLM scores correlated moderately with each other (mean Spearman correlation: 0.34), they correlated weakly but positively, and mostly statistically significantly, with the average expert scores (mean Spearman correlation: 0.22). The highest rank correlation between expert scores and an LLM was 0.33, for Gemma 3 27B using proposal titles and summaries without their main texts, which is about half (56%) of the correlation between reviewers.

Research limitations: The small sample size, the age of the funding call, and heterogeneous evaluation criteria all limit the robustness of the analysis.

Practical implications: Although LLMs score grant proposals less accurately than experts, at least in this special case, they may have a role in application triage or tie-breaking.

Originality/value: This is the first assessment of the value of LLM scores for funding proposals.
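As a worked illustration of the headline statistic, the sketch below computes Spearman's rank correlation between one model's scores and the averaged expert scores using SciPy's scipy.stats.spearmanr, the standard implementation of the method named above. The score arrays are invented placeholders, not data from the study.

```python
from scipy.stats import spearmanr

# Placeholder scores for ten hypothetical applications; the real study
# used the 1994 Swedish Medical Research Council fellowship dataset.
llm_scores    = [3.0, 4.5, 2.0, 3.5, 4.0, 2.5, 3.0, 4.5, 1.5, 3.5]
expert_scores = [2.8, 3.9, 2.5, 2.9, 4.1, 3.2, 2.6, 4.4, 2.1, 3.0]

# Spearman's rho ranks both lists and correlates the ranks, so it
# measures monotonic agreement rather than a linear fit, which suits
# ordinal review scores.
rho, p_value = spearmanr(llm_scores, expert_scores)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```

The same computation, repeated per model against the averaged expert scores, produces the per-model correlations (mean 0.22, maximum 0.33) reported in the findings.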