Goal-Driven Risk Assessment for LLM-Powered Systems: A Healthcare Case Study

📅 2026-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses emerging security risks in LLM-integrated systems within critical domains such as healthcare, where traditional threat modeling falls short. The authors propose a goal-driven risk assessment methodology that, for the first time, integrates LLM-specific attacks, such as prompt injection, with conventional cybersecurity threats. By leveraging attack tree modeling, the approach systematically links identified threats to concrete attack paths, prerequisite conditions, and attack vectors, yielding a context-aware, actionable analysis framework. Applied to a healthcare LLM agent system, the method identifies and prioritizes multiple feasible attack paths, providing practical, risk-informed guidance for secure system design.

📝 Abstract
While incorporating LLMs into systems offers significant benefits in critical application areas such as healthcare, new security challenges emerge due to potential cyber kill chain cycles that combine adversarial model attacks, prompt injection, and conventional cyber attacks. Threat modeling methods enable system designers to identify potential cyber threats and the relevant mitigations during the early stages of development. Although the cyber security community has extensive experience in applying these methods to software-based systems, the elicited threats are usually abstract and vague, limiting their effectiveness for conducting proper likelihood and impact assessments for risk prioritization, especially in complex systems with novel attack surfaces, such as those involving LLMs. In this study, we propose a structured, goal-driven risk assessment approach that contextualizes threats with detailed attack vectors, preconditions, and attack paths through the use of attack trees. We demonstrate the proposed approach on a case study with an LLM agent-based healthcare system. This study harmonizes state-of-the-art attacks on LLMs with conventional ones and presents possible attack paths applicable to similar systems. By providing a structured risk assessment, this study makes a significant contribution to the literature and advances secure-by-design practices in LLM-based systems.
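The attack-tree contextualization the abstract describes can be sketched as a small AND/OR tree: OR nodes represent alternative attack paths toward a goal, AND nodes represent preconditions that must all hold, and leaves carry an attacker-effort estimate used for prioritization. The node names, difficulty scores, and scoring rule below are hypothetical illustrations, not the paper's actual model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """A node in a simple AND/OR attack tree (illustrative sketch)."""
    name: str
    gate: str = "OR"           # "OR": any child path suffices; "AND": all children required
    difficulty: float = 0.0    # leaf-only: estimated attacker effort (higher = harder)
    children: List["Node"] = field(default_factory=list)

    def cost(self) -> float:
        """Cheapest attacker effort needed to achieve this node's goal."""
        if not self.children:
            return self.difficulty
        child_costs = [c.cost() for c in self.children]
        # OR: attacker picks the easiest path; AND: every precondition must be met.
        return min(child_costs) if self.gate == "OR" else sum(child_costs)

# Hypothetical healthcare-LLM-agent example: the root goal is reachable via
# an LLM-specific attack (prompt injection) OR a conventional kill chain.
root = Node("Exfiltrate patient records", "OR", children=[
    Node("Indirect prompt injection via clinical notes", difficulty=2.0),
    Node("Credential theft chain", "AND", children=[
        Node("Phish clinician", difficulty=3.0),
        Node("Bypass MFA", difficulty=4.0),
    ]),
])

print(root.cost())  # 2.0 — the prompt-injection path is the easiest
```

Ranking subtrees by this kind of cheapest-path cost is one simple way to prioritize which attack paths to mitigate first; the paper's own likelihood and impact assessment is richer than this single scalar.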
Problem

Research questions and friction points this paper is trying to address.

LLM-powered systems
goal-driven risk assessment
cyber kill chain
attack vectors
healthcare security
Innovation

Methods, ideas, or system contributions that make the work stand out.

goal-driven risk assessment
LLM-powered systems
attack trees
prompt injection
secure-by-design