"Give a Positive Review Only": An Early Investigation Into In-Paper Prompt Injection Attacks and Defenses for AI Reviewers

📅 2025-11-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work systematically identifies, for the first time, prompt injection attacks embedded within manuscript text targeting AI-assisted peer review systems: malicious authors stealthily encode adversarial prompts into submissions to manipulate AI reviewers into generating inflated, unjustified scores. We propose two attack paradigms—static prompt injection and iterative optimization guided by surrogate reviewer models—and empirically demonstrate 100% success rates across multiple state-of-the-art AI reviewer models. To counter this threat, we design a detection-based defense mechanism that reduces attack success to below 20%; however, adaptive variants remain capable of evasion, underscoring the inherent difficulty of robust defense. Our contributions include (1) establishing prompt injection as a novel research direction in AI peer review security; (2) releasing the first reproducible benchmark for attack and defense evaluation; (3) introducing a robust, iterative attack framework; and (4) proposing an initial defense paradigm—laying critical groundwork for trustworthy AI-assisted scholarly review.

Technology Category

Application Category

📝 Abstract
With the rapid advancement of AI models, their deployment across diverse tasks has become increasingly widespread. A notable emerging application is leveraging AI models to assist in reviewing scientific papers. However, recent reports have revealed that some papers contain hidden, injected prompts designed to manipulate AI reviewers into providing overly favorable evaluations. In this work, we present an early systematic investigation into this emerging threat. We propose two classes of attacks: (1) static attack, which employs a fixed injection prompt, and (2) iterative attack, which optimizes the injection prompt against a simulated reviewer model to maximize its effectiveness. Both attacks achieve striking performance, frequently inducing full evaluation scores when targeting frontier AI reviewers. Furthermore, we show that these attacks are robust across various settings. To counter this threat, we explore a simple detection-based defense. While it substantially reduces the attack success rate, we demonstrate that an adaptive attacker can partially circumvent this defense. Our findings underscore the need for greater attention and rigorous safeguards against prompt-injection threats in AI-assisted peer review.
Problem

Research questions and friction points this paper is trying to address.

Investigating hidden prompt injection attacks on AI paper reviewers
Proposing static and iterative attacks to manipulate review scores
Exploring detection defenses against adaptive prompt injection threats
Innovation

Methods, ideas, or system contributions that make the work stand out.

Static and iterative prompt injection attacks
Detection-based defense against injected prompts
Simulated reviewer model for attack optimization
🔎 Similar Papers
No similar papers found.
Qin Zhou
Qin Zhou
East China University of Science and Technology
computer visionmedical image analysisfederated learningmulti-modal learning
Zhexin Zhang
Zhexin Zhang
Tsinghua University, CoAI Group
NLPAI Safety & Alignment
Z
Zhi Li
Institute of Information Engineering, CAS, Beijing, China; School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China
L
Limin Sun
Institute of Information Engineering, CAS, Beijing, China; School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China