🤖 AI Summary
This work systematically identifies, for the first time, prompt injection attacks embedded within manuscript text targeting AI-assisted peer review systems: malicious authors stealthily encode adversarial prompts into submissions to manipulate AI reviewers into generating inflated, unjustified scores. We propose two attack paradigms—static prompt injection and iterative optimization guided by surrogate reviewer models—and empirically demonstrate 100% success rates across multiple state-of-the-art AI reviewer models. To counter this threat, we design a detection-based defense mechanism that reduces attack success to below 20%; however, adaptive variants remain capable of evasion, underscoring the inherent difficulty of robust defense. Our contributions include (1) establishing prompt injection as a novel research direction in AI peer review security; (2) releasing the first reproducible benchmark for attack and defense evaluation; (3) introducing a robust, iterative attack framework; and (4) proposing an initial defense paradigm—laying critical groundwork for trustworthy AI-assisted scholarly review.
📝 Abstract
With the rapid advancement of AI models, their deployment across diverse tasks has become increasingly widespread. A notable emerging application is leveraging AI models to assist in reviewing scientific papers. However, recent reports have revealed that some papers contain hidden, injected prompts designed to manipulate AI reviewers into providing overly favorable evaluations. In this work, we present an early systematic investigation into this emerging threat. We propose two classes of attacks: (1) static attack, which employs a fixed injection prompt, and (2) iterative attack, which optimizes the injection prompt against a simulated reviewer model to maximize its effectiveness. Both attacks achieve striking performance, frequently inducing full evaluation scores when targeting frontier AI reviewers. Furthermore, we show that these attacks are robust across various settings. To counter this threat, we explore a simple detection-based defense. While it substantially reduces the attack success rate, we demonstrate that an adaptive attacker can partially circumvent this defense. Our findings underscore the need for greater attention and rigorous safeguards against prompt-injection threats in AI-assisted peer review.