AI Summary
This study evaluates the risk of large language models (LLMs) being misused in sophisticated cybercrimes such as romance scams, CEO impersonation, and identity theft. To this end, we introduce the first reproducible, multi-turn interactive evaluation framework co-designed with law enforcement and policy experts. The framework decomposes malicious intent into seemingly benign queries to assess models' ability to generate actionable information, benchmarking against a standard web-search baseline and against open-weight LLMs with safety guardrails removed. Our findings indicate that mainstream closed-source models offer limited assistance with complex criminal activity, whereas open-weight models fine-tuned to remove safety guardrails provide substantially more help. Moreover, multi-turn indirect requests prove more effective at bypassing current defenses than explicitly malicious prompts, exposing critical limitations in existing safety strategies under complex adversarial scenarios.
Abstract
AI is increasingly being used to assist fraud and cybercrime. However, it is unclear whether current large language models can assist complex criminal activity. Working with law enforcement and policy experts, we developed multi-turn evaluations for three fraud and cybercrime scenarios (romance scams, CEO impersonation, and identity theft). Our evaluations focused on text-to-text model capabilities. In each scenario, we measured model capabilities in ways designed to resemble real-world misuse, such as breaking down fraud requests into a sequence of seemingly benign queries, and assessed whether models provide actionable information relative to a standard web-search baseline.
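To make the decomposition protocol concrete, the sketch below shows one way such a multi-turn harness could be structured. It is an illustrative reconstruction, not the paper's released code: the function names (run_decomposed_eval, grade_actionability), the message format, and the stub model and grader are all hypothetical, and the grading callable stands in for the expert-designed rubric and web-search baseline comparison described above.

```python
from typing import Callable, Dict, List

Message = Dict[str, str]  # e.g. {"role": "user" | "assistant", "content": ...}

def run_decomposed_eval(
    model: Callable[[List[Message]], str],
    benign_queries: List[str],
    grade_actionability: Callable[[str], float],
    baseline_score: float,
) -> dict:
    """Send a scenario's decomposed, benign-seeming queries one turn at a
    time, accumulating dialogue context, then score each reply for
    actionable content relative to a web-search baseline."""
    history: List[Message] = []
    scores: List[float] = []
    for query in benign_queries:
        history.append({"role": "user", "content": query})
        reply = model(history)                     # one multi-turn model call
        history.append({"role": "assistant", "content": reply})
        scores.append(grade_actionability(reply))  # stand-in for expert rubric
    uplift = max(scores) - baseline_score          # assistance beyond search
    return {"turn_scores": scores, "uplift_over_search": uplift}

# Toy usage with stub components; a real run would wire in an LLM API
# and the expert grading rubric:
if __name__ == "__main__":
    stub_model = lambda history: "generic, non-actionable reply"
    stub_grader = lambda reply: 0.0 if "non-actionable" in reply else 1.0
    result = run_decomposed_eval(
        stub_model,
        ["What tone do long-distance couples use in messages?",
         "Draft a warm first message to someone I met online."],
        stub_grader,
        baseline_score=0.0,
    )
    print(result)
```

Accumulating the full dialogue history across turns is what distinguishes this setup from single-shot prompting: each benign-seeming query builds on prior context, so assistance is judged on what the conversation as a whole makes actionable.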
We found that (1) current large language models provided minimal practical assistance with complex criminal activity, (2) open-weight large language models fine-tuned to remove safety guardrails provided substantially more help, and (3) decomposing requests into benign-seeming queries elicited more assistance than explicitly malicious framing or system-level jailbreaks. Overall, the results suggest that current risks from text-generation models are relatively minimal. However, this work contributes a reproducible, expert-grounded framework for tracking how these risks may evolve over time as models grow more capable and adversaries adapt.