ASTRIDE: A Security Threat Modeling Platform for Agentic-AI Applications

📅 2025-12-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
AI agent systems introduce novel security threats, including prompt injection, context poisoning, and tool misuse, that the conventional STRIDE threat modeling framework does not capture. Method: This paper proposes the first automated threat modeling platform tailored to AI agents. It extends STRIDE with a new agent-specific threat category and introduces a VLM-LLM collaborative architecture: fine-tuned vision-language models parse system architecture diagrams, while an OpenAI-gpt-oss-based large language model performs threat inference and root-cause attribution; an LLM-powered orchestrator manages the end-to-end workflow. Contribution/Results: Experimental evaluation across diverse AI agent systems demonstrates accurate, scalable, and interpretable threat identification. The platform significantly improves the efficiency and rigor of security analysis for intelligent systems, enabling systematic, automation-assisted threat modeling for next-generation AI agents.

📝 Abstract
AI agent-based systems are becoming increasingly integral to modern software architectures, enabling autonomous decision-making, dynamic task execution, and multimodal interactions through large language models (LLMs). However, these systems introduce novel and evolving security challenges, including prompt injection attacks, context poisoning, model manipulation, and opaque agent-to-agent communication, which are not effectively captured by traditional threat modeling frameworks. In this paper, we introduce ASTRIDE, an automated threat modeling platform purpose-built for AI agent-based systems. ASTRIDE extends the classical STRIDE framework by introducing a new threat category, A for AI Agent-Specific Attacks, which encompasses emerging vulnerabilities unique to agent-based applications, such as prompt injection, unsafe tool invocation, and reasoning subversion. To automate threat modeling, ASTRIDE combines a consortium of fine-tuned vision-language models (VLMs) with the OpenAI-gpt-oss reasoning LLM to perform end-to-end analysis directly from visual agent architecture diagrams, such as data flow diagrams (DFDs). LLM agents orchestrate the end-to-end threat modeling automation process by coordinating interactions between the VLM consortium and the reasoning LLM. Our evaluations demonstrate that ASTRIDE provides accurate, scalable, and explainable threat modeling for next-generation intelligent systems. To the best of our knowledge, ASTRIDE is the first framework to both extend STRIDE with AI-specific threats and integrate fine-tuned VLMs with a reasoning LLM to fully automate diagram-driven threat modeling in AI agent-based applications.
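The extended acronym described in the abstract (classical STRIDE plus a leading "A" for AI Agent-Specific Attacks) can be sketched as an enumeration. This is purely illustrative: the enum and its member names are not from the paper's codebase, and the example threats listed for the "A" category are the ones the abstract itself names.

```python
from enum import Enum

# Hypothetical sketch of the ASTRIDE taxonomy: the first letters of the
# members spell out the acronym. Only the "A" category is new; the other
# six follow the standard STRIDE expansion.
class AstrideCategory(Enum):
    AI_AGENT_SPECIFIC = ("AI agent-specific attacks: prompt injection, "
                         "unsafe tool invocation, reasoning subversion")
    SPOOFING = "Spoofing of identity"
    TAMPERING = "Tampering with data"
    REPUDIATION = "Repudiation of actions"
    INFORMATION_DISCLOSURE = "Information disclosure"
    DENIAL_OF_SERVICE = "Denial of service"
    ELEVATION_OF_PRIVILEGE = "Elevation of privilege"

# Sanity check: member initials spell the platform name.
acronym = "".join(c.name[0] for c in AstrideCategory)
```

Iterating the enum in declaration order yields `acronym == "ASTRIDE"`, which mirrors how the paper positions the new category: prepended to, rather than replacing, the classical six.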
Problem

Research questions and friction points this paper is trying to address.

Addresses security challenges in AI agent systems not covered by traditional frameworks
Automates threat modeling for agentic-AI applications using visual architecture diagrams
Extends STRIDE framework with AI-specific threats like prompt injection attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extends STRIDE with AI-specific threat category A
Combines fine-tuned VLMs with reasoning LLM for automation
Automates threat modeling from visual architecture diagrams
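The diagram-to-threats workflow summarized above can be sketched as a minimal pipeline. Everything here is a stand-in: `parse_diagram` represents the fine-tuned VLM consortium, `infer_threats` the reasoning LLM, and `run_pipeline` the LLM-agent orchestration; the function names, `Component` type, and rule table are assumptions for illustration, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class Component:
    """One element extracted from an architecture diagram (DFD)."""
    name: str
    kind: str  # e.g. "agent", "tool", "datastore"

def parse_diagram(diagram_path: str) -> list:
    """Stand-in for the VLM consortium: turn a DFD image into components.
    Here we return a fixed toy architecture instead of running a model."""
    return [Component("planner", "agent"),
            Component("web_search", "tool")]

def infer_threats(components: list) -> dict:
    """Stand-in for the reasoning LLM: map each component to threat labels.
    A real system would reason over data flows, not a static lookup table."""
    rules = {
        "agent": ["prompt injection", "reasoning subversion"],
        "tool": ["unsafe tool invocation"],
    }
    return {c.name: rules.get(c.kind, []) for c in components}

def run_pipeline(diagram_path: str) -> dict:
    """Stand-in for the orchestrator: chain diagram parsing into inference."""
    return infer_threats(parse_diagram(diagram_path))

report = run_pipeline("architecture_dfd.png")
```

The design point this sketch captures is the separation of concerns the summary describes: perception (diagram parsing) and reasoning (threat inference) are distinct models, with an orchestrator coordinating the hand-off between them.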
Eranga Bandara
Researcher | Engineer
Privacy-Preserving AI · Distributed Systems · Neuroscience · Blockchain · 5G

Amin Hass
Accenture Technology Labs, Arlington, VA, USA

Ross Gore
Research Associate Professor, Old Dominion University
Software Debugging · Data Science · Predictive Analytics · Modeling and Simulation

Sachin Shetty
Old Dominion University
Blockchain · Cyber Resilience · Trustworthy AI

Ravi Mukkamala
Old Dominion University, Norfolk, VA, USA

Safdar H. Bouk
Old Dominion University, Norfolk, VA, USA

Xueping Liang
Florida International University, USA

Ng Wee Keong
Nanyang Technological University, Singapore

Kasun De Zoysa
Deputy Director/Professor in Computer Science at University of Colombo School of Computing (UCSC)
Information Security · Cryptography · Digital Forensics · ICT4D · 5G

Aruna Withanage
Effectz.AI

Nilaan Loganathan
Effectz.AI