David vs. Goliath: Verifiable Agent-to-Agent Jailbreaking via Reinforcement Learning

๐Ÿ“… 2026-02-02
๐Ÿค– AI Summary
This work addresses the security threat of tool-augmented large language model (LLM) agents being induced by unprivileged, tool-less adversaries to perform prohibited actions within the agents' legitimate permissions. To this end, we propose the Tag-Along attack model and develop the Slingshot reinforcement learning framework, which autonomously discovers short, instruction-based jailbreaking strategies through environmental interaction. We formulate inter-agent jailbreaking as a verifiable control problem (a first in the field) and demonstrate that successful attacks converge on concise instructions rather than multi-turn persuasion. Our approach enables zero-shot transfer across models. Experiments show a 67.0% attack success rate on Qwen2.5-32B (versus 1.7% for the baseline), reducing the average number of attempts until first success from 52.3 to 1.3. Zero-shot transfer achieves success rates of 39.2%–56.0% on models including Gemini 2.5 Flash and Meta-SecAlign-8B.

๐Ÿ“ Abstract
The evolution of large language models into autonomous agents introduces adversarial failures that exploit legitimate tool privileges, transforming safety evaluation in tool-augmented environments from a subjective NLP task into an objective control problem. We formalize this threat model as Tag-Along Attacks: a scenario where a tool-less adversary "tags along" on the trusted privileges of a safety-aligned Operator to induce prohibited tool use through conversation alone. To validate this threat, we present Slingshot, a 'cold-start' reinforcement learning framework that autonomously discovers emergent attack vectors, revealing a critical insight: in our setting, learned attacks tend to converge to short, instruction-like syntactic patterns rather than multi-turn persuasion. On held-out extreme-difficulty tasks, Slingshot achieves a 67.0% success rate against a Qwen2.5-32B-Instruct-AWQ Operator (vs. 1.7% baseline), reducing the expected attempts to first success (on solved tasks) from 52.3 to 1.3. Crucially, Slingshot transfers zero-shot to several model families, including closed-source models like Gemini 2.5 Flash (56.0% attack success rate) and defensive-fine-tuned open-source models like Meta-SecAlign-8B (39.2% attack success rate). Our work establishes Tag-Along Attacks as a first-class, verifiable threat model and shows that effective agentic attacks can be elicited from off-the-shelf open-weight models through environment interaction alone.
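The abstract's central move is to make jailbreak success verifiable: instead of a judge model scoring persuasion transcripts, success is an observable environment event (the Operator executing a prohibited tool call), which yields a binary reward suitable for reinforcement learning. A minimal sketch of such a reward check, assuming the environment exposes the Operator's tool-call log as (tool name, arguments) pairs — the function and data layout here are illustrative, not the paper's actual API:

```python
# Hypothetical sketch of a verifiable, binary episode reward for
# agent-to-agent jailbreak RL. Assumes the environment logs the
# Operator's tool calls as (tool_name, args) tuples; names are
# illustrative, not taken from the Slingshot implementation.

def tag_along_reward(tool_calls, prohibited_tools):
    """Return 1.0 iff the Operator executed any prohibited tool call.

    The check is objective: success is defined by an observable
    action in the environment, not by a subjective judgment of
    whether the adversary's message "sounded" like a jailbreak.
    """
    return 1.0 if any(name in prohibited_tools
                      for name, _args in tool_calls) else 0.0


# Example episodes: only the second triggers the reward.
benign = [("search_web", {"query": "weather"})]
attacked = [("search_web", {"query": "weather"}),
            ("delete_file", {"path": "/data/audit.log"})]
print(tag_along_reward(benign, {"delete_file"}))    # 0.0
print(tag_along_reward(attacked, {"delete_file"}))  # 1.0
```

With a reward of this shape, the attacker policy can be trained with any standard RL objective; no human labeling or judge model is needed in the loop, which is what makes the threat model "verifiable."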
Problem

Research questions and friction points this paper is trying to address.

Tag-Along Attacks
agent-to-agent jailbreaking
tool-augmented safety
adversarial failures
verifiable threat model
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tag-Along Attacks
Reinforcement Learning
Agent Jailbreaking
Tool-Augmented LLMs
Zero-Shot Transfer
Samuel Nellessen
Department of Artificial Intelligence, Radboud University, Nijmegen, The Netherlands
Tal Kachman
Radboud University
Machine Learning
Deep Learning
Game Theory
Complexity Theory
Quantum machine learning