Minerva: Reinforcement Learning with Verifiable Rewards for Cyber Threat Intelligence LLMs

📅 2026-01-31
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Large language models struggle to meet the accuracy and robustness requirements of automated systems when generating structured cyber threat intelligence (CTI). To address this challenge, this work introduces, for the first time, a verification-guided reinforcement learning approach tailored to CTI tasks, proposing a unified training framework that integrates task-specific verifiers to provide verifiable rewards. The framework incorporates a lightweight self-training mechanism to alleviate reward sparsity and employs trajectory distillation to refine the model's policy. Experimental results demonstrate that the proposed method significantly outperforms supervised fine-tuning baselines across multiple CTI subtasks, achieving substantial improvements in both the accuracy and the robustness of structured outputs.
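To make the idea of a verifiable reward concrete, here is a minimal sketch of a deterministic verifier for structured CTI output. The JSON field name (`technique_ids`), the identifier pattern, and the canonical ID set are invented for illustration; the paper's actual verifiers, schemas, and identifier sources are not specified here.

```python
import json
import re

# Hypothetical canonical identifier set (ATT&CK-style technique IDs);
# a real verifier would load these from a community-maintained resource.
KNOWN_IDS = {"T1059", "T1566", "T1078"}

def verify_cti_output(raw: str) -> float:
    """Deterministic reward: 1.0 only if the output parses as JSON and
    every predicted identifier is well-formed and canonical, else 0.0."""
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError:
        return 0.0  # malformed structure earns no reward
    ids = obj.get("technique_ids", [])
    if not isinstance(ids, list) or not ids:
        return 0.0  # missing or empty identifier list
    ok = all(
        isinstance(i, str) and re.fullmatch(r"T\d{4}", i) and i in KNOWN_IDS
        for i in ids
    )
    return 1.0 if ok else 0.0
```

Because the check is deterministic, the same output always receives the same reward, which is what makes the signal usable for RLVR-style training.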

πŸ“ Abstract
Cyber threat intelligence (CTI) analysts routinely convert noisy, unstructured security artifacts into standardized, automation-ready representations. Although large language models (LLMs) show promise for this task, existing approaches remain brittle when producing structured CTI outputs and have largely relied on supervised fine-tuning (SFT). In contrast, CTI standards and community-maintained resources define canonical identifiers and schemas that enable deterministic verification of model outputs. We leverage this structure to study reinforcement learning with verifiable rewards (RLVR) for CTI tasks. We introduce Minerva, a unified dataset and training pipeline spanning multiple CTI subtasks, each paired with task-specific verifiers that score structured outputs and identifier predictions. To address reward sparsity during rollout, we propose a lightweight self-training mechanism that generates additional verified trajectories and distills them back into the model. Experiments across LLM backbones show consistent improvements in accuracy and robustness over SFT across multiple benchmarks.
Problem

Research questions and friction points this paper is trying to address.

Cyber Threat Intelligence
Large Language Models
Structured Output
Verifiable Rewards
Reinforcement Learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement Learning with Verifiable Rewards
Cyber Threat Intelligence
Structured Output Verification
Self-training with Verified Trajectories
LLM Alignment