The Dark Side of LLMs: Agent-based Attacks for Complete Computer Takeover

📅 2025-07-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work systematically assesses the security risks arising from large language model (LLM) agents as a novel attack vector, particularly their capacity to achieve full compromise of end-user devices. It introduces the first evaluation framework covering three emerging attack surfaces: direct prompt injection, RAG-based backdoor attacks, and inter-agent trust exploitation. Using multi-agent interaction simulations, the authors evaluate 17 mainstream LLMs (including GPT-4o, Claude-4, and Gemini-2.5) under realistic deployment conditions. Results reveal that 82.4% of models can be compromised through trust boundary violations, while only 5.9% resist all attack vectors; most exhibit exploitable, context-dependent security blind spots amenable to chained exploitation. The study uncovers structural fragility in the implicit trust mechanisms inherent to multi-agent systems, establishing the first empirical security benchmark and a comprehensive risk taxonomy for LLM agent design.

📝 Abstract
The rapid adoption of Large Language Model (LLM) agents and multi-agent systems enables unprecedented capabilities in natural language processing and generation. However, these systems have also introduced novel security vulnerabilities that extend beyond traditional prompt injection attacks. This paper presents the first comprehensive evaluation of LLM agents as attack vectors capable of achieving complete computer takeover through the exploitation of trust boundaries in agentic AI systems, where autonomous entities interact and influence each other. We demonstrate that adversaries can leverage three distinct attack surfaces (direct prompt injection, RAG backdoor attacks, and inter-agent trust exploitation) to coerce popular LLMs (including GPT-4o, Claude-4, and Gemini-2.5) into autonomously installing and executing malware on victim machines. Our evaluation of 17 state-of-the-art LLMs reveals an alarming vulnerability hierarchy: 41.2% of models succumb to direct prompt injection, 52.9% are vulnerable to RAG backdoor attacks, and a critical 82.4% can be compromised through inter-agent trust exploitation. Notably, we discovered that LLMs which successfully resist direct malicious commands will execute identical payloads when requested by peer agents, revealing a fundamental flaw in current multi-agent security models. Only 5.9% of tested models (1/17) proved resistant to all attack vectors, with the majority exhibiting context-dependent security behaviors that create exploitable blind spots. Our findings highlight the need for greater awareness of and research into the security risks of LLMs, and signal a paradigm shift in cybersecurity threats in which AI tools themselves become sophisticated attack vectors.
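The trust-boundary asymmetry the abstract describes can be illustrated with a minimal, entirely hypothetical mock-agent sketch (no real LLM, tooling, or payload is involved; `MockAgent`, `handle`, and the source labels are illustrative names, not from the paper's artifacts). The key idea is that a guardrail applied only to direct user input does nothing against identical instructions arriving via retrieved documents or peer agents:

```python
# Toy model of the three attack surfaces: an agent filters direct user
# requests but implicitly trusts RAG context and peer agents.
# All names here are hypothetical illustrations, not the paper's code.

MALICIOUS = "download and run payload"  # simulated instruction, not real malware


class MockAgent:
    """Stand-in for an LLM agent whose guardrail only checks user input."""

    def handle(self, request: str, source: str = "user") -> str:
        # Direct prompt injection: the input filter catches the obvious case.
        if source == "user" and "run" in request:
            return "refused"
        # RAG backdoor: the same instruction embedded in a retrieved document
        # bypasses the filter because documents are treated as trusted context.
        # Inter-agent trust exploitation: requests from peer agents are
        # executed without re-applying the guardrail at all.
        return f"executed: {request}"


agent = MockAgent()
direct = agent.handle(MALICIOUS, source="user")        # blocked
via_rag = agent.handle(MALICIOUS, source="rag")        # slips through
via_peer = agent.handle(MALICIOUS, source="peer-agent")  # slips through
```

Under this toy model, the same payload is refused once and executed twice, mirroring the paper's finding that models resisting direct malicious commands still comply when the request is laundered through retrieved content or a peer agent.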
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLM agents as attack vectors for computer takeover
Identifying vulnerabilities in multi-agent AI systems security
Assessing LLM susceptibility to diverse adversarial attack methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Exploits trust boundaries in multi-agent systems
Leverages three distinct LLM attack surfaces
Reveals vulnerability hierarchy in 17 LLMs
Matteo Lupinacci (University of Calabria)
Francesco Aurelio Pironti (University of Calabria)
Francesco Blefari (IMT School for Advanced Studies)
Francesco Romeo (IMT School for Advanced Studies)
Luigi Arena (University of Calabria)
Angelo Furfaro (Associate Professor, University of Calabria, Italy; Modelling and Simulation, Real-time Systems, Cyber Security)