AthenaBench: A Dynamic Benchmark for Evaluating LLMs in Cyber Threat Intelligence

📅 2025-11-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large language models (LLMs) exhibit critical capability bottlenecks in cyber threat intelligence (CTI) analysis, particularly in deep-reasoning tasks such as unstructured report comprehension, threat actor attribution, and risk mitigation planning. To address this gap, we propose AthenaBench, a dynamic multi-task evaluation benchmark designed specifically for CTI. It introduces an enhanced dataset construction pipeline, semantic deduplication, fine-grained evaluation metrics, and, as a novel contribution, the first dedicated task for risk mitigation strategy generation. A comprehensive evaluation of state-of-the-art models, including GPT-5, Gemini 2.5 Pro, and leading open-source LLMs (e.g., LLaMA, Qwen), reveals substantial deficiencies in core CTI reasoning capabilities. These findings underscore the necessity of domain-specific evaluation frameworks and provide both a rigorous benchmark and a clear technical direction for developing CTI-specialized LLMs.

📝 Abstract
Large Language Models (LLMs) have demonstrated strong capabilities in natural language reasoning, yet their application to Cyber Threat Intelligence (CTI) remains limited. CTI analysis involves distilling large volumes of unstructured reports into actionable knowledge, a process where LLMs could substantially reduce analyst workload. CTIBench introduced a comprehensive benchmark for evaluating LLMs across multiple CTI tasks. In this work, we extend CTIBench by developing AthenaBench, an enhanced benchmark that includes an improved dataset creation pipeline, duplicate removal, refined evaluation metrics, and a new task focused on risk mitigation strategies. We evaluate twelve LLMs, including state-of-the-art proprietary models such as GPT-5 and Gemini-2.5 Pro, alongside seven open-source models from the LLaMA and Qwen families. While proprietary LLMs achieve stronger results overall, their performance remains subpar on reasoning-intensive tasks, such as threat actor attribution and risk mitigation, with open-source models trailing even further behind. These findings highlight fundamental limitations in the reasoning capabilities of current LLMs and underscore the need for models explicitly tailored to CTI workflows and automation.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLM performance on cyber threat intelligence tasks
Addressing reasoning limitations in threat actor attribution and risk mitigation
Developing an enhanced benchmark for CTI-specific model assessment
Innovation

Methods, ideas, or system contributions that make the work stand out.

An enhanced benchmark with an improved dataset creation pipeline
Duplicate removal and refined evaluation metrics
A new task focused on risk mitigation strategies
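The paper does not detail how its duplicate removal works, but the idea of pruning near-identical benchmark questions can be sketched with a simple similarity filter. The sketch below is a hypothetical stand-in that uses token-level Jaccard overlap rather than the semantic deduplication AthenaBench itself may apply; all function names, the threshold, and the example questions are illustrative assumptions.

```python
import re


def tokens(text: str) -> set[str]:
    """Lowercased word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))


def jaccard(a: str, b: str) -> float:
    """Overlap of two texts' token sets (crude proxy for semantic similarity)."""
    ta, tb = tokens(a), tokens(b)
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)


def deduplicate(items: list[str], threshold: float = 0.8) -> list[str]:
    """Greedily keep an item only if it is below the similarity
    threshold against every item already kept."""
    kept: list[str] = []
    for item in items:
        if all(jaccard(item, k) < threshold for k in kept):
            kept.append(item)
    return kept


# Hypothetical benchmark questions: the first two differ only in hyphenation.
questions = [
    "Which threat actor is attributed to the SolarWinds supply-chain attack?",
    "Which threat actor is attributed to the SolarWinds supply chain attack?",
    "What mitigation reduces the risk of credential phishing?",
]
print(deduplicate(questions))  # keeps the first and third questions
```

A production pipeline would more likely compare embedding vectors with cosine similarity, which also catches paraphrases that share few surface tokens; the greedy keep-or-drop loop stays the same either way.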