SoK: Honeypots & LLMs, More Than the Sum of Their Parts?

📅 2025-10-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
The longstanding trade-off between low risk and high fidelity in honeypot design remains unresolved. Method: This work systematically investigates the feasibility and implementation pathways of leveraging large language models (LLMs) to enhance honeypots, proposing an LLM-augmented high-fidelity deception architecture. It introduces a honeypot detection vector classification framework, establishes a standardized design paradigm, and outlines a phased evolution roadmap—from log compression to intelligent deception generation. The approach integrates LLMs with camouflage techniques, adversarial detection, and knowledge distillation. Contribution/Results: This study delivers the first comprehensive survey in the field, clarifying key technical challenges and evaluation criteria. It further proposes a research roadmap toward next-generation autonomous, self-evolving, and self-optimizing network deception defenses.

Technology Category

Application Category

📝 Abstract
The advent of Large Language Models (LLMs) promised to resolve the long-standing paradox in honeypot design: achieving high-fidelity deception with low operational risk. However, despite a flurry of research since late 2022, progress has been incremental, and the field lacks a cohesive understanding of the emerging architectural patterns, core challenges, and evaluation paradigms. To fill this gap, this Systematization of Knowledge (SoK) paper provides the first comprehensive overview of this new domain. We survey and systematize three critical, intersecting research areas: first, we provide a taxonomy of honeypot detection vectors, structuring the core problems that LLM-based realism must solve; second, we synthesize the emerging literature on LLM-honeypots, identifying a canonical architecture and key evaluation trends; and third, we chart the evolutionary path of honeypot log analysis, from simple data reduction to automated intelligence generation. We synthesize these findings into a forward-looking research roadmap, arguing that the true potential of this technology lies in creating autonomous, self-improving deception systems to counter the emerging threat of intelligent, automated attackers.
Problem

Research questions and friction points this paper is trying to address.

Resolving honeypot design paradox with high-fidelity deception and low risk
Systematizing architectural patterns, challenges, and evaluation paradigms for LLM-honeypots
Creating autonomous deception systems against intelligent automated attackers
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs enhance honeypot deception with low risk
Systematizes taxonomy and architecture of LLM-honeypots
Proposes autonomous self-improving deception systems
🔎 Similar Papers
No similar papers found.
Robert A. Bridges
Robert A. Bridges
Mathematician & Innovation Leader, AI Sweden
differentially private machine learningcontrol theory for system stability
T
Thomas R. Mitchell
Security R&D, Volvo Group, Gothenburg, Sweden
M
Mauricio Muñoz
AI Labs, AI Sweden, Gothenburg, Sweden
T
Ted Henriksson
AI Labs, AI Sweden, Gothenburg, Sweden