🤖 AI Summary
Academic meta-reviewing, the task of synthesizing multiple reviewer reports into a coherent, comprehensive recommendation, is prone to human fatigue and subjectivity. To address this, we propose a large language model (LLM)-assisted multi-perspective summarization (MPS) approach. This work presents a systematic case study of GPT-3.5, LLaMA2, and PaLM2 for controllable summarization in meta-reviewing, grounded in the TELeR prompting taxonomy. Through prompts of increasing detail, we explicitly guide summary accuracy, coverage, and structural coherence. A detailed qualitative evaluation indicates that more carefully specified prompts improve MPS quality across these dimensions. The study establishes a reproducible methodological framework for AI-assisted peer review and relates prompt granularity to the quality of meta-review assistance.
📝 Abstract
One of the most important yet onerous tasks in the academic peer-reviewing process is composing meta-reviews, which involves assimilating diverse opinions from multiple expert peers, forming one's own judgment as a senior expert, and then summarizing all these perspectives into a concise, holistic overview that supports an overall recommendation. This process is time-consuming and can be compromised by human factors such as fatigue, inconsistency, and overlooked details. Given the latest major developments in Large Language Models (LLMs), it is very compelling to rigorously study whether LLMs can help meta-reviewers perform this important task better. In this paper, we perform a case study with three popular LLMs, i.e., GPT-3.5, LLaMA2, and PaLM2, to assist meta-reviewers in better comprehending multiple experts' perspectives by generating a controlled multi-perspective summary (MPS) of their opinions. To achieve this, we prompt the three LLMs with different types/levels of prompts based on the recently proposed TELeR taxonomy. Finally, we perform a detailed qualitative study of the MPSs generated by the LLMs and report our findings.
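To make the leveled-prompting setup concrete, below is a minimal sketch of how prompts of increasing detail, in the spirit of the TELeR taxonomy, might be assembled for MPS generation. The level wording, the `build_mps_prompt` helper, and the demo reviews are illustrative assumptions for this sketch, not the paper's actual prompts or code; TELeR's higher levels additionally add evaluation criteria and examples, which are only hinted at here.

```python
# Illustrative sketch of TELeR-style leveled prompts for multi-perspective
# summarization (MPS). The level wording is an approximation of the TELeR
# taxonomy's "level of detail" dimension, not the paper's exact prompts.

def build_mps_prompt(reviews: list[str], level: int) -> str:
    """Compose an MPS prompt at a given detail level
    (higher level = more detailed directive)."""
    reviews_text = "\n\n".join(
        f"Review {i + 1}:\n{r}" for i, r in enumerate(reviews)
    )
    if level == 1:
        # Level 1: a simple one-sentence directive.
        directive = "Summarize the peer reviews below."
    elif level == 2:
        # Level 2: a multi-sentence directive describing the task.
        directive = (
            "Write a multi-perspective summary of the peer reviews below. "
            "Cover each reviewer's main points and note agreements and "
            "disagreements among the reviewers."
        )
    else:
        # Level 3 and above: an enumerated list of sub-tasks; in TELeR,
        # still higher levels also specify evaluation criteria and
        # provide few-shot examples (omitted in this sketch).
        directive = (
            "Write a multi-perspective summary of the peer reviews below.\n"
            "1. State each reviewer's key strengths and weaknesses.\n"
            "2. Highlight points of agreement and disagreement.\n"
            "3. Keep the summary concise and factually faithful."
        )
    return f"{directive}\n\n{reviews_text}"


if __name__ == "__main__":
    demo_reviews = [
        "The method is novel, but the evaluation lacks strong baselines.",
        "Well written; however, the dataset seems too small.",
    ]
    # The resulting string would be sent to each LLM (GPT-3.5, LLaMA2,
    # PaLM2) via its respective completion API.
    print(build_mps_prompt(demo_reviews, level=2))
```

The design point this illustrates is that the review content stays fixed while only the directive's granularity varies, so differences in the generated MPSs can be attributed to prompt level rather than input.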