ExCyTIn-Bench: Evaluating LLM agents on Cyber Threat Investigation

📅 2025-07-14
📈 Citations: 0 · Influential: 0
🤖 AI Summary
This paper addresses the evaluation of large language model (LLM)-based agents in cybersecurity threat investigation, introducing the first benchmark specifically designed for multi-hop evidence tracing. Methodologically, it builds investigation graphs from real-world attack scenarios, proposes a node-pair-driven question generation mechanism, and leverages expert-defined detection rules to extract heterogeneous logs, construct interpretable ground-truth annotations, and enable programmatic task evaluation, which also makes the tasks usable for reinforcement learning training. Key contributions include: (1) releasing the first open-source threat investigation graph benchmark, covering eight multi-step attack types and 57 log tables; (2) enabling scalable, verifiable automated evaluation; and (3) empirical results showing that state-of-the-art models achieve only an average score of 0.249 (best 0.368), revealing critical bottlenecks in reasoning and evidence integration.

📝 Abstract
We present ExCyTIn-Bench, the first benchmark to Evaluate an LLM agent x on the task of Cyber Threat Investigation through security questions derived from investigation graphs. Real-world security analysts must sift through a large number of heterogeneous alert signals and security logs, follow multi-hop chains of evidence, and compile an incident report. With the developments of LLMs, building LLM-based agents for automatic threat investigation is a promising direction. To assist the development and evaluation of LLM agents, we construct a dataset from a controlled Azure tenant that covers 8 simulated real-world multi-step attacks, 57 log tables from Microsoft Sentinel and related services, and 589 automatically generated questions. We leverage security logs extracted with expert-crafted detection logic to build threat investigation graphs, and then generate questions with LLMs using paired nodes on the graph, taking the start node as background context and the end node as answer. Anchoring each question to these explicit nodes and edges not only provides automatic, explainable ground truth answers but also makes the pipeline reusable and readily extensible to new logs. This also enables the automatic generation of procedural tasks with verifiable rewards, which can be naturally extended to training agents via reinforcement learning. Our comprehensive experiments with different models confirm the difficulty of the task: with the base setting, the average reward across all evaluated models is 0.249, and the best achieved is 0.368, leaving substantial headroom for future research. Code and data are coming soon!
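The node-pair question-generation idea from the abstract (start node as background context, end node as ground-truth answer) can be illustrated with a minimal sketch. This is not the paper's pipeline: the toy graph, entity names, and question template below are all hypothetical, and the real benchmark derives its graphs from Microsoft Sentinel logs via expert detection logic.

```python
# Illustrative sketch of node-pair question generation over an
# investigation graph. Graph, node names, and templates are made up.

# Toy investigation graph: directed evidence edges between entities,
# e.g. attacker IP -> compromised account -> host -> dropped file.
EDGES = {
    "ip:203.0.113.7": ["account:svc-backup"],
    "account:svc-backup": ["host:web-01"],
    "host:web-01": ["file:payload.ps1"],
}

def reachable_pairs(edges):
    """Enumerate (start, end) node pairs connected by an evidence path."""
    pairs = []
    for start in edges:
        stack, seen = list(edges[start]), set()
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            pairs.append((start, node))
            stack.extend(edges.get(node, []))
    return pairs

def to_question(start, end):
    """Turn a node pair into a task: start node is the background
    context, end node is the verifiable ground-truth answer."""
    return {
        "context": f"The investigation begins at {start}.",
        "question": f"Starting from {start}, which entity does the "
                    f"chain of evidence lead to?",
        "answer": end,
    }

questions = [to_question(s, e) for s, e in reachable_pairs(EDGES)]
```

Because each question is anchored to an explicit (start, end) pair on the graph, the answer is checkable programmatically, which is what makes the tasks reusable as verifiable rewards for reinforcement learning.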
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLM agents on cyber threat investigation tasks
Building a benchmark from real-world multi-step attack simulations
Automating security log analysis and incident reporting
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM agents for cyber threat investigation
Dataset from a controlled Azure tenant covering 8 simulated multi-step attacks
Automatic question generation with explainable ground-truth answers
👥 Authors
Yiran Wu, Pennsylvania State University
Mauricio Velazco, Microsoft Security AI Research
Andrew Zhao, Tsinghua University
Manuel Raúl Meléndez Luján, Microsoft Security AI Research
Srisuma Movva, Microsoft Security AI Research
Yogesh K Roy, Microsoft Security AI Research
Quang Nguyen, Microsoft Security AI Research
Roberto Rodriguez, Microsoft Security AI Research
Qingyun Wu, The Pennsylvania State University
Michael Albada, Microsoft
Julia Kiseleva, Microsoft Research
Anand Mudgerikar, Microsoft Security AI Research