🤖 AI Summary
Large language models struggle to meet the accuracy and robustness requirements of automated systems when generating structured cyber threat intelligence (CTI). To address this challenge, this work introduces, for the first time, a verification-guided reinforcement learning approach tailored to CTI tasks: a unified training framework that integrates task-specific verifiers to provide verifiable rewards. The framework incorporates a lightweight self-training mechanism to alleviate reward sparsity and employs trajectory distillation to refine the model's policy. Experimental results demonstrate that the proposed method significantly outperforms supervised fine-tuning baselines across multiple CTI subtasks, with substantial gains in both the accuracy and robustness of structured outputs.
📄 Abstract
Cyber threat intelligence (CTI) analysts routinely convert noisy, unstructured security artifacts into standardized, automation-ready representations. Although large language models (LLMs) show promise for this task, existing approaches remain brittle when producing structured CTI outputs and have largely relied on supervised fine-tuning (SFT). In contrast, CTI standards and community-maintained resources define canonical identifiers and schemas that enable deterministic verification of model outputs. We leverage this structure to study reinforcement learning with verifiable rewards (RLVR) for CTI tasks. We introduce *Minerva*, a unified dataset and training pipeline spanning multiple CTI subtasks, each paired with task-specific verifiers that score structured outputs and identifier predictions. To address reward sparsity during rollout, we propose a lightweight self-training mechanism that generates additional verified trajectories and distills them back into the model. Experiments across LLM backbones show consistent improvements in accuracy and robustness over SFT across multiple benchmarks.
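The abstract's key observation is that canonical CTI identifiers and schemas allow outputs to be checked deterministically, which is what makes the rewards "verifiable." As a minimal illustration of that idea (not the paper's actual verifier — the field names, scoring weights, and use of MITRE ATT&CK-style technique IDs here are our own assumptions), a verifier might combine a structural check with an exact identifier match:

```python
import json
import re

def verify_cti_output(raw_output: str, gold: dict) -> float:
    """Hypothetical task-specific verifier: maps a model's raw structured
    CTI prediction to a deterministic reward in [0, 1]."""
    # Structural check: unparseable output earns zero reward.
    try:
        pred = json.loads(raw_output)
    except json.JSONDecodeError:
        return 0.0
    score = 0.5  # half the reward for well-formed structure (assumed weight)

    # Identifier check: canonical ATT&CK-style technique IDs such as
    # "T1566.001" can be compared exactly against the reference labels.
    pred_ids = {i for i in pred.get("technique_ids", [])
                if re.fullmatch(r"T\d{4}(\.\d{3})?", str(i))}
    gold_ids = set(gold["technique_ids"])
    if gold_ids:
        score += 0.5 * len(pred_ids & gold_ids) / len(gold_ids)
    return score

# A well-formed prediction with the correct identifier earns full reward;
# malformed output earns nothing, which is the sparsity problem the
# paper's self-training mechanism is meant to alleviate.
full = verify_cti_output('{"technique_ids": ["T1566.001"]}',
                         {"technique_ids": ["T1566.001"]})
none = verify_cti_output('not even json',
                         {"technique_ids": ["T1566.001"]})
```

Because the reward is computed by exact comparison against canonical identifiers rather than by a learned judge, it is reproducible and cannot be gamed by fluent but wrong text, which is the property RLVR relies on.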