A Multi-Turn Framework for Evaluating AI Misuse in Fraud and Cybercrime Scenarios

πŸ“… 2026-02-25
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This study evaluates the risk of large language models (LLMs) being misused in sophisticated cybercrimes such as romance scams, CEO impersonation, and identity theft. To this end, we introduce the first reproducible, multi-turn interactive evaluation framework co-designed with law enforcement and policy experts. The framework decomposes malicious intent into sequences of seemingly benign queries and measures whether models provide actionable information, benchmarked against a standard web search and against open-weight LLMs with safety safeguards removed. Our findings indicate that mainstream closed-source models offer limited assistance for complex criminal activities; however, risk increases substantially for open-weight models fine-tuned to strip their safety guardrails. Moreover, multi-turn indirect requests prove more effective than explicitly malicious prompts at bypassing current defenses, exposing critical limitations in existing safety strategies under complex adversarial scenarios.

πŸ“ Abstract
AI is increasingly being used to assist fraud and cybercrime. However, it is unclear whether current large language models can assist complex criminal activity. Working with law enforcement and policy experts, we developed multi-turn evaluations for three fraud and cybercrime scenarios (romance scams, CEO impersonation, and identity theft). Our evaluations focused on text-to-text model capabilities. In each scenario, we measured model capabilities in ways designed to resemble real-world misuse, such as breaking down requests for fraud into a sequence of seemingly benign queries, and measuring whether models provide actionable information, relative to a standard web search baseline. We found that (1) current large language models provide minimal practical assistance with complex criminal activity, (2) open-weight large language models fine-tuned to remove safety guardrails provided substantially more help, and (3) decomposing requests into benign-seeming queries elicited more assistance than explicitly malicious framing or system-level jailbreaks. Overall, the results suggest that current risks from text-generation models are relatively minimal. However, this work contributes a reproducible, expert-grounded framework for tracking how these risks may evolve with time as models grow more capable and adversaries adapt.
Problem

Research questions and friction points this paper is trying to address.

AI misuse
fraud
cybercrime
large language models
risk evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-turn evaluation
AI misuse
large language models
cybercrime scenarios
safety guardrails
Kimberly T. Mai
AI Security Institute, UK
Anna Gausen
AI Security Institute, UK
Magda Dubois
AI Security Institute, UK
Mona Murad
AI Security Institute, UK
Bessie O'Dell
AI Security Institute, UK
Nadine Staes-Polet
AI Security Institute, UK
Christopher Summerfield
University of Oxford
Cognitive Science · Neuroscience
Andrew Strait
AI Security Institute, UK