🤖 AI Summary
This study addresses the empirical quantification of attacker cognitive biases in cybersecurity, specifically examining how loss aversion shapes dynamic attack decision-making. To overcome a key limitation of existing methods, which struggle to dynamically infer underlying psychological mechanisms from real-world attack behaviors, we propose the first application of large language models (LLMs) to segment hacker operational notes, identify intent, and construct fine-grained cognitive models, integrated with a persistence-oriented attack framework for trigger-condition correlation analysis. Experimental results demonstrate that loss aversion significantly influences critical attacker decisions (resource allocation, path backtracking, and target switching) and that LLMs effectively capture its temporal manifestations. Our work establishes a novel paradigm for behaviorally grounded, real-time adaptive defense and provides a rigorously verifiable methodology for cognitive-behavioral modeling in adversarial contexts.
📝 Abstract
Understanding and quantifying human cognitive biases from empirical data has long posed a formidable challenge, particularly in cybersecurity, where defending against unknown adversaries is paramount. Traditional cyber defense strategies have largely focused on fortification; some approaches attempt to anticipate attacker strategies by mapping them to cognitive vulnerabilities, yet they fall short of dynamically interpreting attacks in progress. In recognition of this gap, IARPA's ReSCIND program seeks to infer, defend against, and even exploit attacker cognitive traits. In this paper, we present a novel methodology that leverages large language models (LLMs) to extract quantifiable insights into the cognitive bias of loss aversion from hacker behavior. Our data are collected from an experiment in which hackers were recruited to attack a controlled demonstration network. We process the hacker-generated notes with LLMs, segmenting them into discrete actions and correlating those actions with predefined persistence mechanisms used by hackers. By correlating the implementation of these mechanisms with various operational triggers, our analysis provides new insights into how loss aversion manifests in hacker decision-making. The results demonstrate that LLMs can effectively dissect and interpret nuanced behavioral patterns, thereby offering a transformative approach to enhancing cyber defense strategies through real-time, behavior-based analysis.
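To make the pipeline concrete, the sketch below illustrates one way the note-segmentation and trigger-correlation steps described above could be implemented. This is a minimal illustration, not the authors' implementation: the OpenAI chat API, the `gpt-4o` model name, the prompt wording, and the persistence-mechanism list are all assumptions introduced here for demonstration.

```python
# Minimal sketch of LLM-based segmentation of hacker notes and
# trigger/persistence-mechanism correlation. All model, prompt, and
# taxonomy choices below are illustrative assumptions, not the paper's.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical subset of predefined persistence mechanisms; the paper's
# actual taxonomy is not reproduced here.
PERSISTENCE_MECHANISMS = [
    "scheduled_task", "new_account", "ssh_authorized_keys",
    "web_shell", "startup_script",
]

PROMPT = (
    "Segment the following hacker operational notes into discrete actions. "
    "For each action, return a JSON object with fields: "
    "'action' (short description), 'intent' (one phrase), "
    "'mechanism' (one of {mechanisms} or 'none'), and "
    "'trigger' (the operational event that prompted it, or 'none'). "
    "Return a JSON array only.\n\nNotes:\n{notes}"
)

def segment_notes(notes: str) -> list[dict]:
    """Ask the LLM to split raw notes into tagged, structured actions."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": PROMPT.format(
                mechanisms=PERSISTENCE_MECHANISMS, notes=notes
            ),
        }],
        temperature=0,  # near-deterministic output for reproducible tagging
    )
    # Assumes the model returns a bare JSON array as instructed.
    return json.loads(response.choices[0].message.content)

def trigger_correlation(actions: list[dict]) -> dict[str, dict[str, int]]:
    """Count how often each operational trigger co-occurs with each
    persistence mechanism, the raw material for the bias analysis."""
    counts: dict[str, dict[str, int]] = {}
    for act in actions:
        if act["mechanism"] == "none":
            continue
        row = counts.setdefault(act["trigger"], {})
        row[act["mechanism"]] = row.get(act["mechanism"], 0) + 1
    return counts

if __name__ == "__main__":
    actions = segment_notes(
        "Lost shell after reboot. Re-entered via old creds, "
        "added cron job and a second backdoor account before scanning."
    )
    print(trigger_correlation(actions))
```

Under this reading, a spike in persistence-mechanism installs immediately after loss events (dropped shells, revoked credentials) relative to routine operation is the kind of signal one would inspect for loss aversion; how exactly the paper quantifies that correlation is not specified in the abstract.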