Toward Quantitative Modeling of Cybersecurity Risks Due to AI Misuse

📅 2025-12-09
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the challenge of quantifying how AI misuse exacerbates cybersecurity risks. We develop nine quantitative risk models grounded in the MITRE ATT&CK framework to systematically assess AI's effect on attack scale, frequency, success rate, and harm severity. Methodologically, we propose a dual-source uncertainty estimation framework that integrates Delphi expert elicitation with LLM-simulated expert reasoning, and we map AI benchmark scores (e.g., from Cybench and BountyBench) to interpretable risk factors in order to decompose and attribute uplift in attack efficacy across dimensions. Monte Carlo aggregation then yields auditable, confidence-interval-bounded results. The approach supports dynamic defense prioritization and evidence-based AI governance decisions, marking a first step beyond qualitative description toward verifiable, debatable, and iterative quantitative AI security risk assessment.
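To make the benchmark-to-risk-factor mapping concrete, here is a minimal sketch of one way such a mapping could look: a logistic curve that converts a Cybench-style solve rate into a multiplicative uplift on an attack step's success probability. The function name, curve shape, and all parameter values are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: mapping a benchmark score to a risk-factor multiplier.
# The logistic shape and every parameter value are illustrative assumptions.
import math

def success_uplift(benchmark_score: float, midpoint: float = 0.5,
                   slope: float = 8.0, max_uplift: float = 3.0) -> float:
    """Map a benchmark solve rate in [0, 1] (e.g., a Cybench-style score)
    to a multiplicative uplift on an attack step's success probability.

    Near-zero scores give roughly 1x (no uplift); high scores saturate
    at `max_uplift`.
    """
    gate = 1.0 / (1.0 + math.exp(-slope * (benchmark_score - midpoint)))
    return 1.0 + (max_uplift - 1.0) * gate

for score in (0.1, 0.5, 0.9):
    print(f"score={score:.1f} -> uplift x{success_uplift(score):.2f}")
```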

📝 Abstract
Advanced AI systems offer substantial benefits but also introduce risks. In 2025, AI-enabled cyber offense has emerged as a concrete example. This technical report applies a quantitative risk modeling methodology (described in full in a companion paper) to this domain. We develop nine detailed cyber risk models that allow analyzing AI uplift as a function of AI benchmark performance. Each model decomposes attacks into steps using the MITRE ATT&CK framework and estimates how AI affects the number of attackers, attack frequency, probability of success, and resulting harm, to determine different types of uplift. To produce these estimates with associated uncertainty, we employ both human experts, via a Delphi study, and LLM-based simulated experts, with both mapping benchmark scores (from Cybench and BountyBench) to risk model factors. Individual estimates are aggregated through Monte Carlo simulation. The results indicate systematic uplift in attack efficacy, speed, and target reach, with different mechanisms of uplift across risk models. We intend our quantitative risk modeling to serve several aims: helping cybersecurity teams prioritize mitigations, AI evaluators design benchmarks, AI developers make more informed deployment decisions, and policymakers obtain information to set risk thresholds. Similar goals drove the shift over time from qualitative to quantitative assessment in other high-risk industries, such as nuclear power. We propose this methodology and initial application as a step in that direction for AI risk management. While our estimates carry significant uncertainty, publishing detailed quantified results enables experts to pinpoint exactly where they disagree. This helps to collectively refine estimates, something that cannot be done with qualitative assessments alone.
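As an illustration of the kind of decomposition and aggregation the abstract describes, the following sketch propagates uncertain factor estimates (number of attackers, attack frequency, success probability, loss per success) through a Monte Carlo simulation and reports a 90% interval for baseline versus AI-uplifted expected annual loss. All distributions, intervals, and uplift multipliers are invented for illustration; the paper's nine models are far more detailed.

```python
# A minimal Monte Carlo sketch of the decomposition described in the abstract:
# annual expected loss = attackers x attempts per actor x P(success) x loss
# per successful attack. All values below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

def lognormal_from_ci(p5, p95, size):
    """Sample a lognormal whose 5th/95th percentiles match (p5, p95)."""
    z = 1.6449  # standard-normal 95th percentile
    mu = (np.log(p5) + np.log(p95)) / 2
    sigma = (np.log(p95) - np.log(p5)) / (2 * z)
    return rng.lognormal(mu, sigma, size)

# Hypothetical elicited 90% intervals for each risk factor.
attackers = lognormal_from_ci(100, 1_000, N)                 # capable actors
attempts  = lognormal_from_ci(2, 20, N)                      # attacks per actor per year
p_success = np.clip(lognormal_from_ci(0.01, 0.10, N), 0, 1)  # per-attack success
loss      = lognormal_from_ci(1e5, 1e7, N)                   # USD per successful attack

# Hypothetical AI uplift multipliers on each factor.
up = {"attackers": 1.5, "attempts": 2.0, "p_success": 1.8, "loss": 1.2}

baseline = attackers * attempts * p_success * loss
uplifted = (attackers * up["attackers"] * attempts * up["attempts"]
            * np.clip(p_success * up["p_success"], 0, 1)
            * loss * up["loss"])

for label, risk in (("baseline", baseline), ("with AI uplift", uplifted)):
    lo, med, hi = np.percentile(risk, [5, 50, 95])
    print(f"{label}: median ${med:,.0f}/yr, 90% CI [${lo:,.0f}, ${hi:,.0f}]")
```

Because the factors multiply, even modest per-factor uplifts compound; interval-bounded outputs make that compounding, and its uncertainty, visible in a way point estimates do not.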
Problem

Research questions and friction points this paper is trying to address.

Quantitatively models cybersecurity risks from AI misuse in cyberattacks.
Analyzes AI's impact on attack efficacy, speed, and target reach.
Aims to guide mitigation prioritization, benchmark design, and policy decisions.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quantitative risk modeling using AI benchmark performance
Combining human and LLM experts via Delphi study
Monte Carlo simulation for aggregating uncertainty estimates (a pooling sketch follows this list)
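As a rough illustration of how estimates from the two expert sources might be combined, the sketch below fits a lognormal to each expert's 90% interval and pools the experts as an equal-weight mixture before reading off percentiles. The specific intervals, the lognormal choice, and the equal weighting are all assumptions made for this example, not the paper's exact aggregation scheme.

```python
# Sketch: pooling human (Delphi) and LLM-simulated expert estimates into one
# uncertainty distribution via an equal-weight linear opinion pool over
# lognormals fitted to each expert's 90% interval. Values are invented.
import numpy as np

rng = np.random.default_rng(1)
Z95 = 1.6449  # standard-normal 95th percentile

# Each expert gives a (5th, 95th) percentile estimate for one risk factor,
# e.g., the multiplicative uplift on attack success probability.
human_experts = [(1.2, 2.5), (1.0, 3.0), (1.5, 4.0)]
llm_experts   = [(1.1, 2.0), (1.3, 3.5)]

def pool(expert_cis, n_per_expert=20_000):
    samples = []
    for p5, p95 in expert_cis:
        mu = (np.log(p5) + np.log(p95)) / 2
        sigma = (np.log(p95) - np.log(p5)) / (2 * Z95)
        samples.append(rng.lognormal(mu, sigma, n_per_expert))
    return np.concatenate(samples)  # equal-weight mixture of expert beliefs

pooled = pool(human_experts + llm_experts)
lo, med, hi = np.percentile(pooled, [5, 50, 95])
print(f"pooled uplift: median x{med:.2f}, 90% CI [x{lo:.2f}, x{hi:.2f}]")
```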
👥 Authors
Steve Barrett
SaferAI
Malcolm Murray
SaferAI
Otter Quarks
SaferAI
Matthew Smith
SaferAI
Jakub Krýs
SaferAI
Siméon Campos
SaferAI
Alejandro Tlaie Boria
Pour Demain
Chloé Touzet
SaferAI
Sevan Hayrapet
0labs
Fred Heiding
Harvard Kennedy School
Omer Nevo
Irregular
Adam Swanda
Cisco
Jair Aguirre
RAND
Asher Brass Gershovich
Institute for AI Policy and Strategy
Eric Clay
Flare
Ryan Fetterman
Cisco
Mario Fritz
Faculty, CISPA Helmholtz Center for Information Security; Professor, Saarland University
Computer Vision, Machine Learning, Trustworthy AI, Security, Privacy
Marc Juarez
Assistant Professor, School of Informatics, University of Edinburgh
privacy, security, networks, machine learning, algorithmic bias
Vasilios Mavroudis
Research Scientist, Alan Turing Institute
Machine Learning, Systems Security, Artificial Intelligence
Henry Papadatos
SaferAI