A Survey of Agentic AI and Cybersecurity: Challenges, Opportunities and Use-case Prototypes

📅 2026-01-08
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the dual-edged nature of AI agents in cybersecurity, which simultaneously enhance autonomous defense capabilities—such as threat hunting and automated response—and empower sophisticated attacks, including automated reconnaissance and social engineering, thereby exposing critical gaps in current governance and security mechanisms. The work presents the first systematic survey of offensive and defensive applications of AI agents in this domain, integrating literature analysis, threat modeling, and prototype implementation to propose a targeted security framework and evaluation methodology. It identifies emerging risks such as agent collusion and memory poisoning, and demonstrates through three representative use cases how design choices in agent architecture critically influence both security posture and operational effectiveness.

Technology Category

Application Category

📝 Abstract
Agentic AI marks an important transition from single-step generative models to systems capable of reasoning, planning, acting, and adapting over long-lasting tasks. By integrating memory, tool use, and iterative decision cycles, these systems enable continuous, autonomous workflows in real-world environments. This survey examines the implications of agentic AI for cybersecurity. On the defensive side, agentic capabilities enable continuous monitoring, autonomous incident response, adaptive threat hunting, and fraud detection at scale. Conversely, the same properties amplify adversarial power by accelerating reconnaissance, exploitation, coordination, and social-engineering attacks. These dual-use dynamics expose fundamental gaps in existing governance, assurance, and accountability mechanisms, which were largely designed for non-autonomous and short-lived AI systems. To address these challenges, we survey emerging threat models, security frameworks, and evaluation pipelines tailored to agentic systems, and analyze systemic risks including agent collusion, cascading failures, oversight evasion, and memory poisoning. Finally, we present three representative use-case implementations that illustrate how agentic AI behaves in practical cybersecurity workflows, and how design choices shape reliability, safety, and operational effectiveness.
Problem

Research questions and friction points this paper is trying to address.

Agentic AI
Cybersecurity
Dual-use
Autonomous systems
Systemic risks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Agentic AI
Cybersecurity
Autonomous Agents
Dual-use AI
Memory Poisoning
🔎 Similar Papers
No similar papers found.