From Secure Agentic AI to Secure Agentic Web: Challenges, Threats, and Future Directions

📅 2026-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses novel security threats faced by large language models (LLMs) operating as autonomous agents in open web environments, where traditional defenses fall short against risks arising from tool invocation, persistent memory, and interaction with untrusted content. The paper introduces a threat taxonomy tailored to agent-based web systems, systematically delineating attack vectors including prompt abuse, environment injection, memory manipulation, toolchain exploitation, model tampering, and agent network attacks. It further shows how these threats propagate and amplify through cross-domain interactions, delegation chains, and protocol-driven ecosystems. To counter these challenges, the study proposes a defense framework encompassing threat modeling, secure decoding, permission control, runtime monitoring, and protocol-layer protections, while advancing new directions such as interoperable identity authentication, provenance tracing, and ecosystem-wide response mechanisms to establish a trustworthy foundation for large-scale autonomous agent ecosystems.

📝 Abstract
Large Language Models (LLMs) are increasingly deployed as agentic systems that plan, memorize, and act in open-world environments. This shift brings new security problems: failures are no longer only unsafe text generation, but can become real harm through tool use, persistent memory, and interaction with untrusted web content. In this survey, we provide a transition-oriented view from Secure Agentic AI to a Secure Agentic Web. We first summarize a component-aligned threat taxonomy covering prompt abuse, environment injection, memory attacks, toolchain abuse, model tampering, and agent network attacks. We then review defense strategies, including prompt hardening, safety-aware decoding, privilege control for tools and APIs, runtime monitoring, continuous red-teaming, and protocol-level security mechanisms. We further discuss how these threats and mitigations escalate in the Agentic Web, where delegation chains, cross-domain interactions, and protocol-mediated ecosystems amplify risks via propagation and composition. Finally, we highlight open challenges for web-scale deployment, such as interoperable identity and authorization, provenance and traceability, ecosystem-level response, and scalable evaluation under adaptive adversaries. Our goal is to connect recent empirical findings with system-level requirements, and to outline practical research directions toward trustworthy agent ecosystems.
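As an illustration of the "privilege control for tools and APIs" defense the abstract lists, the sketch below shows one common way such control is realized: an allowlist-based gate with per-tool call budgets that an agent runtime consults before executing a tool call. This is a minimal hypothetical example, not the paper's own mechanism; all names (`ToolGate`, `ALLOWED_TOOLS`) are illustrative.

```python
# Hypothetical allowlist gate for agent tool calls (illustrative only).
# High-risk tools are simply absent from the policy, so any attempt to
# invoke them is denied before execution.

ALLOWED_TOOLS = {
    "search": {"max_calls": 10},     # read-only, low risk
    "read_file": {"max_calls": 5},
    # "shell" deliberately absent: high-risk tools need an explicit grant
}

class ToolPermissionError(Exception):
    """Raised when a tool call is outside the agent's granted privileges."""

class ToolGate:
    def __init__(self, policy):
        self.policy = policy
        self.call_counts = {}

    def authorize(self, tool_name):
        """Permit the call, or raise if unlisted or over its call budget."""
        if tool_name not in self.policy:
            raise ToolPermissionError(f"tool '{tool_name}' not allowlisted")
        used = self.call_counts.get(tool_name, 0)
        if used >= self.policy[tool_name]["max_calls"]:
            raise ToolPermissionError(f"call budget exhausted for '{tool_name}'")
        self.call_counts[tool_name] = used + 1

gate = ToolGate(ALLOWED_TOOLS)
gate.authorize("search")        # permitted
try:
    gate.authorize("shell")     # denied: not on the allowlist
except ToolPermissionError as e:
    print(e)
```

The design choice here (deny-by-default plus per-tool budgets) mirrors the survey's point that agent failures become real harm through tool use: even a compromised planning step cannot invoke a tool the runtime never granted.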
Problem

Research questions and friction points this paper is trying to address.

Secure Agentic AI
Agentic Web
LLM security
agent threats
web-scale deployment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Agentic AI
Secure Agentic Web
Threat Taxonomy
Runtime Monitoring
Ecosystem-level Security