AI Summary
This work addresses the security threat posed by tool-augmented large language model (LLM) agents being induced by unprivileged, tool-less adversaries to perform prohibited actions within their legitimate permissions. To this end, we propose the Tag-Along attack model and develop the Slingshot reinforcement learning framework, which autonomously discovers short, instruction-based jailbreaking strategies through environmental interaction. We formulate inter-agent jailbreaking as a verifiable control problem, a first in the field, and demonstrate that successful attacks converge on concise instructions rather than multi-turn persuasion. Our approach enables zero-shot transfer across models. Experiments show a 67.0% attack success rate on Qwen2.5-32B (versus 1.7% for the baseline), reducing the average number of attempts until first success from 52.3 to 1.3. Zero-shot transfer achieves success rates of 39.2%–56.0% on models including Gemini 2.5 Flash and Meta-SecAlign-8B.
Abstract
The evolution of large language models into autonomous agents introduces adversarial failures that exploit legitimate tool privileges, transforming safety evaluation in tool-augmented environments from a subjective NLP task into an objective control problem. We formalize this threat model as Tag-Along Attacks: a scenario where a tool-less adversary "tags along" on the trusted privileges of a safety-aligned Operator to induce prohibited tool use through conversation alone. To validate this threat, we present Slingshot, a "cold-start" reinforcement learning framework that autonomously discovers emergent attack vectors, revealing a critical insight: in our setting, learned attacks tend to converge to short, instruction-like syntactic patterns rather than multi-turn persuasion. On held-out extreme-difficulty tasks, Slingshot achieves a 67.0% success rate against a Qwen2.5-32B-Instruct-AWQ Operator (vs. 1.7% baseline), reducing the expected attempts to first success (on solved tasks) from 52.3 to 1.3. Crucially, Slingshot transfers zero-shot to several model families, including closed-source models like Gemini 2.5 Flash (56.0% attack success rate) and defensive-fine-tuned open-source models like Meta-SecAlign-8B (39.2% attack success rate). Our work establishes Tag-Along Attacks as a first-class, verifiable threat model and shows that effective agentic attacks can be elicited from off-the-shelf open-weight models through environment interaction alone.
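As a rough sanity check on the "expected attempts to first success" statistic, a minimal sketch: if each attempt is modeled as an independent Bernoulli trial with success probability equal to the attack success rate (an assumption not stated in the abstract; the reported 52.3 and 1.3 are empirical values on solved tasks), the expected number of attempts is geometric, i.e. 1/p, which lands in the same ballpark as the reported figures.

```python
# Hedged sketch under a geometric-trial assumption: each attack attempt
# succeeds independently with probability p = attack success rate (ASR),
# so the expected number of attempts until first success is 1/p.
# The paper's 52.3 and 1.3 are measured empirically on solved tasks,
# so they need not match 1/p exactly.
def expected_attempts(asr: float) -> float:
    """Expected trials to first success when each trial succeeds with prob. `asr`."""
    return 1.0 / asr

baseline = expected_attempts(0.017)   # 1.7% baseline ASR  -> ~58.8 attempts
slingshot = expected_attempts(0.670)  # 67.0% Slingshot ASR -> ~1.5 attempts
print(f"baseline ~{baseline:.1f} attempts, Slingshot ~{slingshot:.1f} attempts")
```

The geometric estimates (roughly 58.8 and 1.5) are close to, but not identical with, the reported 52.3 and 1.3, consistent with the caveat that the empirical averages condition on solved tasks.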