🤖 AI Summary
This study is the first to systematically identify cognitive biases exhibited by human red-team operators during cyber-attack decision-making and to quantify their impact on offensive behavior. We conducted three large-scale human-subject experiments (July 2024 to March 2025) in an enterprise-scale network environment simulated on the SimSpace Cyber Force platform and augmented with Kali Linux attack nodes. From 19-20 proficient attackers per experiment, we collected multimodal data: terminal logs, network traffic traces, keystroke sequences, operational notes, and validated psychometric survey responses. A high-quality, expert-annotated dataset has been publicly released on IEEE Dataport. Our work fills a critical gap in bias-aware cybersecurity modeling, establishing a foundational framework for cognition-informed red/blue-team evaluation, automated bias detection in adversarial operations, and human-AI collaborative defense systems.
📝 Abstract
We present three large-scale human-subjects red-team cyber range datasets from the Guarding Against Malicious Biased Threats (GAMBiT) project. Across Experiments 1-3 (July 2024-March 2025), 19-20 skilled attackers per experiment conducted two 8-hour days of self-paced operations in a simulated enterprise network (SimSpace Cyber Force Platform) while we captured multimodal data: self-reports (background, demographics, psychometrics), operational notes, terminal histories, keylogs, network packet captures (PCAP), and network intrusion detection system (NIDS) alerts from Suricata. Each participant began from a standardized Kali Linux VM and pursued realistic objectives (e.g., target discovery and data exfiltration) under controlled constraints. Curated derivative logs and labels are also included. The combined release supports research on attacker behavior modeling, bias-aware analytics, and method benchmarking. Data are available via the IEEE Dataport entries for Experiments 1-3.
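As an illustration of how the NIDS modality might be consumed, the sketch below filters alert events out of Suricata's line-delimited EVE JSON output. This is a hypothetical example: the inline sample records, field selection, and signature names are our assumptions for illustration, not the schema or contents of the released datasets (in practice one would stream lines from the release's `eve.json`-style files).

```python
import json

# Hypothetical sample of Suricata EVE JSON lines (one JSON object per line);
# a real dataset would be read line-by-line from the alert log files.
SAMPLE_EVE = """\
{"timestamp": "2024-07-15T09:12:01.000Z", "event_type": "alert", "src_ip": "10.0.0.5", "dest_ip": "10.0.1.20", "alert": {"signature": "ET SCAN Suspicious User-Agent", "severity": 2}}
{"timestamp": "2024-07-15T09:12:02.000Z", "event_type": "flow", "src_ip": "10.0.0.5", "dest_ip": "10.0.1.20"}
{"timestamp": "2024-07-15T09:15:44.000Z", "event_type": "alert", "src_ip": "10.0.0.5", "dest_ip": "10.0.1.30", "alert": {"signature": "ET POLICY Large Outbound Transfer", "severity": 1}}
"""

def extract_alerts(eve_lines: str) -> list[dict]:
    """Keep only 'alert' events and project the fields an analyst might
    join against terminal histories or keylog timelines."""
    alerts = []
    for line in eve_lines.splitlines():
        if not line.strip():
            continue
        event = json.loads(line)
        if event.get("event_type") == "alert":
            alerts.append({
                "timestamp": event["timestamp"],
                "src_ip": event["src_ip"],
                "dest_ip": event["dest_ip"],
                "signature": event["alert"]["signature"],
                "severity": event["alert"]["severity"],
            })
    return alerts

alerts = extract_alerts(SAMPLE_EVE)
```

Joining such alerts with the other modalities on timestamp and source IP is one way to align network-visible actions with an operator's recorded decisions.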