🤖 AI Summary
Existing network deception tools suffer from poor portability and limited extensibility, hindering rapid experimentation with and evaluation of deception strategies. Method: The paper proposes an environment-agnostic framework for modeling high-level deception strategies that automatically infers attack paths via an attack graph service and provides end-to-end mapping from an abstract strategy layer to an emulation execution layer. The system integrates four core components (an action planner, an observability module, an environment state service, and an attack graph service) to support large-scale what-if scenario testing in realistic emulated networks. Contribution/Results: Evaluated across 55 distinct deception scenarios, the framework uncovers trade-offs among multiple defense strategies and substantially reduces the cost of strategy design and experimental iteration.
📝 Abstract
Cyber deception aims to distract, delay, and detect network attackers with fake assets such as honeypots, decoy credentials, or decoy files. Today, however, it is difficult for operators to experiment with, explore, and evaluate deception approaches: existing tools and platforms have non-portable, complex implementations that are difficult to modify and extend. We address this pain point by introducing Perry, a high-level framework that accelerates the design and exploration of deception what-if scenarios. Perry has two components: a high-level abstraction layer for security operators to specify attackers and deception strategies, and an experimentation module that runs these attackers and defenders in realistic emulated networks. To translate these high-level specifications, we design four key modules for Perry: 1) an action planner that translates high-level actions into low-level implementations, 2) an observability module that translates low-level telemetry into high-level observations, 3) an environment state service that enables environment-agnostic strategies, and 4) an attack graph service to reason about how attackers could explore an environment. We show that Perry's abstractions reduce the effort of implementing a wide variety of deception defenses, attackers, and environments. We demonstrate Perry's value by emulating 55 unique deception what-if scenarios and illustrate how these experiments help operators shed light on subtle trade-offs.
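The abstract does not show Perry's actual API, but the idea of an environment-agnostic strategy layer can be sketched as follows: a deception strategy is expressed as a pure function over a high-level view of environment state (as an environment state or attack graph service might expose it), returning abstract actions for an action planner to translate into low-level implementations. All names here (`DeceptionAction`, `EnvState`, `honeypot_on_path`) are hypothetical illustrations, not Perry's real interface:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass(frozen=True)
class DeceptionAction:
    """Hypothetical environment-agnostic deception step; a planner would
    map it to a concrete implementation (e.g., launch a honeypot VM)."""
    kind: str    # e.g. "deploy_honeypot", "plant_decoy_credential"
    target: str  # abstract host name, resolved by the action planner

@dataclass
class EnvState:
    """Hypothetical snapshot from an environment state service."""
    hosts: List[str]
    likely_attack_path: List[str]  # e.g. inferred via an attack graph query

# A strategy maps observed state to planned actions; keeping it a pure
# function over abstract state is what makes it portable across networks.
Strategy = Callable[[EnvState], List[DeceptionAction]]

def honeypot_on_path(state: EnvState) -> List[DeceptionAction]:
    """Place a honeypot next to each host on the inferred attack path."""
    return [DeceptionAction("deploy_honeypot", host)
            for host in state.likely_attack_path]

state = EnvState(hosts=["web", "db", "hr"], likely_attack_path=["web", "db"])
plan = honeypot_on_path(state)
# plan now holds two abstract deploy_honeypot actions, one per path host
```

Because the strategy never names concrete IPs, hypervisors, or telemetry formats, the same function could (in principle) be replayed across many emulated what-if environments, which is the kind of reuse the abstraction layer is meant to enable.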