Paper2Rebuttal: A Multi-Agent Framework for Transparent Author Response Assistance

📅 2026-01-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing rebuttal generation methods, which often suffer from hallucination, neglect of reviewer concerns, or insufficient verifiable grounding. To overcome these issues, the authors propose RebuttalAgent—the first multi-agent framework specifically designed for rebuttal generation—reformulating the task as an evidence-driven planning process. The approach decomposes reviewer comments into atomic claims, constructs a hybrid context by integrating compressed summaries with high-fidelity source text, and incorporates on-demand external literature retrieval. It first generates an auditable response plan before drafting the final reply. Evaluated on the newly introduced RebuttalBench benchmark, RebuttalAgent significantly outperforms strong baselines in coverage, faithfulness, and strategic coherence, offering a traceable, verifiable, and controllable assistant for author responses.
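As a rough illustration of the plan-before-draft pipeline the summary describes, the sketch below walks a review through decomposition into atomic claims, naive evidence attachment, and an auditable response plan. All names and the keyword-overlap matching are hypothetical stand-ins for the paper's LLM-based agents, not the authors' code.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """One atomic reviewer concern plus the evidence grounding its rebuttal."""
    text: str
    evidence: list[str] = field(default_factory=list)

def decompose(review: str) -> list[Claim]:
    # Naive stand-in for LLM-based decomposition into atomic concerns:
    # split the review into sentences.
    return [Claim(s.strip()) for s in review.split(".") if s.strip()]

def attach_evidence(claims: list[Claim], manuscript: dict[str, str]) -> None:
    # Hypothetical keyword-overlap retrieval standing in for the hybrid
    # context (compressed summaries + high-fidelity source text).
    for claim in claims:
        claim_words = set(claim.text.lower().split())
        for section, text in manuscript.items():
            if claim_words & set(text.lower().split()):
                claim.evidence.append(f"[{section}] {text}")

def build_plan(claims: list[Claim]) -> list[str]:
    # Auditable plan: each entry pairs a concern with its grounding,
    # flagging concerns that would need the external search module.
    return [
        f"Concern: {c.text} | Evidence: {c.evidence or ['<needs external search>']}"
        for c in claims
    ]

review = "The baselines are outdated. Ablations are missing"
manuscript = {
    "Sec 4": "We compare against 2024 baselines",
    "Sec 5": "Ablations on each module",
}
claims = decompose(review)
attach_evidence(claims, manuscript)
plan = build_plan(claims)
```

In the real system each step would be an agent with verifiable outputs; the point of the sketch is only the ordering, with every plan entry explicitly anchored in evidence before any drafting happens.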

📝 Abstract
Writing effective rebuttals is a high-stakes task that demands more than linguistic fluency, as it requires precise alignment between reviewer intent and manuscript details. Current solutions typically treat this as a direct-to-text generation problem, suffering from hallucination, overlooked critiques, and a lack of verifiable grounding. To address these limitations, we introduce $\textbf{RebuttalAgent}$, the first multi-agent framework that reframes rebuttal generation as an evidence-centric planning task. Our system decomposes complex feedback into atomic concerns and dynamically constructs hybrid contexts by synthesizing compressed summaries with high-fidelity text, while integrating an autonomous, on-demand external search module to resolve concerns requiring outside literature. By generating an inspectable response plan before drafting, $\textbf{RebuttalAgent}$ ensures that every argument is explicitly anchored in internal or external evidence. We validate our approach on the proposed $\textbf{RebuttalBench}$ and demonstrate that our pipeline outperforms strong baselines in coverage, faithfulness, and strategic coherence, offering a transparent and controllable assistant for the peer review process. Code will be released.
Problem

Research questions and friction points this paper is trying to address.

rebuttal generation
hallucination
reviewer feedback
evidence grounding
peer review
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-agent framework
evidence-centric planning
rebuttal generation
external search module
response plan
Qianli Ma
Shanghai Jiao Tong University
Deep Learning, Generative AI, LLMs, MLLMs
Chang Guo
AutoLab, School of Artificial Intelligence, Shanghai Jiao Tong University
Zhiheng Tian
AutoLab, School of Artificial Intelligence, Shanghai Jiao Tong University
Siyu Wang
AutoLab, School of Artificial Intelligence, Shanghai Jiao Tong University
Jipeng Xiao
AutoLab, School of Artificial Intelligence, Shanghai Jiao Tong University
Yuanhao Yue
Fudan University
LLM, NLP, Instruction Tuning, Data Synthesis, Factuality
Zhipeng Zhang
School of Artificial Intelligence, Shanghai Jiao Tong University
Computer Vision, Object Tracking and Segmentation