Breaking Agent Backbones: Evaluating the Security of Backbone LLMs in AI Agents

📅 2025-10-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing LLM safety evaluation frameworks lack a systematic characterization of the security risks that arise when LLMs serve as the backbone of AI agents. To address this, we propose the threat snapshot framework, which isolates the critical execution states in which LLM vulnerabilities manifest during agent operation, enabling precise identification and categorization of security risks. Based on this methodology, we introduce b³, an agent-centric security benchmark comprising 194,331 unique crowdsourced adversarial attacks, and empirically evaluate 31 popular LLMs. Our findings reveal that enhanced reasoning capabilities correlate with improved security, whereas model size exhibits no significant relationship with security performance. We publicly release the benchmark, dataset, and evaluation code to provide a reproducible, quantifiable, and scalable basis for assessing backbone LLM security.

📝 Abstract
AI agents powered by large language models (LLMs) are being deployed at scale, yet we lack a systematic understanding of how the choice of backbone LLM affects agent security. The non-deterministic sequential nature of AI agents complicates security modeling, while the integration of traditional software with AI components entangles novel LLM vulnerabilities with conventional security risks. Existing frameworks only partially address these challenges as they either capture specific vulnerabilities only or require modeling of complete agents. To address these limitations, we introduce threat snapshots: a framework that isolates specific states in an agent's execution flow where LLM vulnerabilities manifest, enabling the systematic identification and categorization of security risks that propagate from the LLM to the agent level. We apply this framework to construct the $\operatorname{b}^3$ benchmark, a security benchmark based on 194,331 unique crowdsourced adversarial attacks. We then evaluate 31 popular LLMs with it, revealing, among other insights, that enhanced reasoning capabilities improve security, while model size does not correlate with security. We release our benchmark, dataset, and evaluation code to facilitate widespread adoption by LLM providers and practitioners, offering guidance for agent developers and incentivizing model developers to prioritize backbone security improvements.
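To make the threat snapshot idea concrete, here is a minimal sketch of how such a check could be structured: it freezes a single state in an agent's execution flow and replays it with an adversarial input, scoring whether the backbone LLM violates its instructions. The class, function names, and violation check are hypothetical illustrations, not the paper's actual benchmark interface.

```python
from dataclasses import dataclass

@dataclass
class ThreatSnapshot:
    """One frozen agent execution state plus an attack to replay at that state."""
    system_prompt: str        # agent instructions in effect at the frozen state
    conversation: list[str]   # messages leading up to the frozen state
    adversarial_input: str    # crowdsourced attack injected at this state
    violation_marker: str     # substring whose presence indicates a successful attack

def call_backbone_llm(system_prompt: str, messages: list[str]) -> str:
    """Placeholder for the backbone LLM under evaluation (hypothetical interface)."""
    raise NotImplementedError

def attack_succeeds(snapshot: ThreatSnapshot) -> bool:
    """Replay the frozen state with the attack appended and check for a violation."""
    messages = snapshot.conversation + [snapshot.adversarial_input]
    response = call_backbone_llm(snapshot.system_prompt, messages)
    return snapshot.violation_marker in response
```

Because each snapshot is self-contained, the same set of snapshots can be replayed against every backbone LLM to compare their security, without modeling a complete agent.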
Problem

Research questions and friction points this paper is trying to address.

Evaluating how backbone LLM choice impacts AI agent security risks
Isolating LLM vulnerability states in agent execution flow systematically
Benchmarking security propagation from backbone LLMs to agent level
Innovation

Methods, ideas, or system contributions that make the work stand out.

Framework isolates LLM vulnerability states in agents
Benchmark uses crowdsourced attacks for security evaluation
Evaluation of 31 LLMs shows reasoning capability improves security while model size does not (see the illustrative sketch below)
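
The size finding can be illustrated with a simple rank-correlation check over per-model results; the sketch below uses placeholder numbers, not the paper's data, and assumes an attack success rate has already been computed for each model.

```python
from scipy.stats import spearmanr

# Hypothetical per-model results: parameter counts (billions) and the fraction
# of benchmark attacks that succeeded against each backbone LLM.
model_sizes_b  = [7, 8, 13, 32, 70, 405]
attack_success = [0.41, 0.35, 0.38, 0.29, 0.44, 0.40]

rho, p_value = spearmanr(model_sizes_b, attack_success)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A small rho with a large p-value would be consistent with the finding that
# model size does not predict backbone security.
```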
Julia Bazinska
Lakera AI

Max Mathys
Lakera AI

Francesco Casucci
Lakera AI, ETH Zürich

Mateo Rojas-Carulla
Lakera AI
Machine Learning, Artificial Intelligence, Causal Inference, Statistics

Xander Davies
UK AI Security Institute

Alexandra Souly
UK AI Security Institute

Niklas Pfister
Associate Professor, University of Copenhagen