🤖 AI Summary
In CTI-domain NER, retrieval-based in-context learning (ICL) suffers from unreliable implicit induction and spurious entity-type overlaps. To address this, we propose TTPrompt, a TTP-driven explicit instruction framework. Our method leverages CTI's Tactics, Techniques, and Procedures (TTP) taxonomy as a semantic scaffold for hierarchical instruction modeling, and introduces a feedback-driven instruction refinement (FIR) mechanism that dynamically adapts to annotation dialects under few-shot settings. The framework integrates TTP-semantic mapping, systematic instruction engineering, and lightweight refinement on minimal labeled data. Evaluated on five CTI NER benchmarks, the approach consistently outperforms retrieval-based ICL baselines: with only 1% of the labeled data used for refinement, it rivals full-data fine-tuning; on CTINexus, Macro F1 exceeds the fine-tuned ACLM baseline by 10.91%; on LADDER, Micro F1 reaches 71.96%. This work marks a paradigm shift from implicit induction to explicit, semantics-guided reasoning in CTI NER.
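To make the Tactic/Technique/Procedure hierarchy concrete, here is a minimal Python sketch of how such an explicit instruction prompt could be assembled. The class, its field names, and the guideline strings are illustrative assumptions, not the paper's actual implementation:

```python
from dataclasses import dataclass, field


@dataclass
class TTPromptTemplate:
    """Hypothetical container mirroring the Tactic/Technique/Procedure hierarchy."""
    tactic: str                                          # task definition: what to extract and why
    techniques: list[str] = field(default_factory=list)  # guiding strategies for the model
    procedures: list[str] = field(default_factory=list)  # concrete annotation guidelines

    def render(self, text: str) -> str:
        """Assemble the explicit instruction prompt for one input document."""
        lines = [f"## Tactic (task definition)\n{self.tactic}",
                 "## Techniques (guiding strategies)"]
        lines += [f"- {t}" for t in self.techniques]
        lines.append("## Procedures (annotation guidelines)")
        lines += [f"- {p}" for p in self.procedures]
        lines.append(f"## Input\n{text}\n## Output (JSON list of entities)")
        return "\n".join(lines)


prompt = TTPromptTemplate(
    tactic="Extract CTI entities (malware, threat actor, tool, vulnerability) from the report.",
    techniques=["Resolve aliases before assigning an entity type.",
                "Prefer the most specific applicable entity type."],
    procedures=["Tag CVE identifiers (e.g. CVE-2021-44228) as VULNERABILITY.",
                "Tag file hashes as INDICATOR, never as MALWARE."],
).render("Lazarus deployed the AppleJeus backdoor via CVE-2021-44228.")
print(prompt)
```

The design point is that every layer of the hierarchy is explicit text the model must follow, rather than a pattern it must induce from retrieved examples.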
📝 Abstract
The automation of Cyber Threat Intelligence (CTI) relies heavily on Named Entity Recognition (NER) to extract critical entities from unstructured text. Currently, Large Language Models (LLMs) primarily address this task through retrieval-based In-Context Learning (ICL). This paper analyzes the mainstream ICL paradigm and reveals a fundamental flaw: its success stems not from global semantic similarity but largely from the incidental overlap of entity types within retrieved examples, exposing the limitations of relying on unreliable implicit induction. To address this, we propose TTPrompt, a framework that shifts from implicit induction to explicit instruction. TTPrompt maps the core concepts of CTI's Tactics, Techniques, and Procedures (TTPs) onto an instruction hierarchy: task definitions serve as Tactics, guiding strategies as Techniques, and annotation guidelines as Procedures. Furthermore, to address the limited adaptability of static guidelines, we introduce Feedback-driven Instruction Refinement (FIR), which enables LLMs to refine their own guidelines by learning from errors on minimal labeled data, adapting to distinct annotation dialects. Experiments on five CTI NER benchmarks demonstrate that TTPrompt consistently surpasses retrieval-based baselines. Notably, with refinement on just 1% of the training data, it rivals models fine-tuned on the full dataset. For instance, on LADDER, its Micro F1 of 71.96% approaches the fine-tuned baseline, and on the more complex CTINexus, its Macro F1 exceeds the fine-tuned ACLM model by 10.91%.
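Below is a minimal sketch of one plausible FIR loop, assuming a generic `call_llm` chat-completion callable; the function name, prompt wording, and the naive exact-match error check are hypothetical stand-ins for whatever interfaces the paper actually uses:

```python
import json
from typing import Callable


def refine_guidelines(
    guidelines: str,
    labeled_sample: list[dict],        # e.g. ~1% of the training split:
    call_llm: Callable[[str], str],    #   [{"text": ..., "entities": [...]}, ...]
    max_rounds: int = 3,
) -> str:
    """Iteratively rewrite annotation guidelines based on observed errors."""
    for _ in range(max_rounds):
        errors = []
        for ex in labeled_sample:
            raw = call_llm(f"{guidelines}\n\nInput: {ex['text']}\nEntities (JSON list):")
            try:
                pred = json.loads(raw)
            except json.JSONDecodeError:
                pred = None                # malformed output counts as an error
            if pred != ex["entities"]:     # naive exact-match check for the sketch
                errors.append({"text": ex["text"], "gold": ex["entities"], "pred": raw})
        if not errors:                     # guidelines already fit this dataset's dialect
            break
        # Ask the model to revise its own guidelines in light of the mistakes.
        guidelines = call_llm(
            "Revise the annotation guidelines below so that the listed errors "
            "would not recur. Return only the revised guidelines.\n\n"
            f"Guidelines:\n{guidelines}\n\nErrors:\n{json.dumps(errors, indent=2)}"
        )
    return guidelines
```

The key point the abstract emphasizes is that the refinement target is the instructions rather than the model weights, which is why a tiny labeled sample suffices.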