Uncovering Vulnerabilities of LLM-Assisted Cyber Threat Intelligence

📅 2025-09-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work identifies three intrinsic vulnerabilities of large language models (LLMs) in cyber threat intelligence (CTI) tasks: spurious correlations, contradictory knowledge, and constrained generalization. These vulnerabilities are rooted in the inherent characteristics of threat environments rather than in model architecture. To assess them systematically, the authors propose an evaluation framework integrating stratified sampling, autoregressive refinement, and expert-coordinated validation, applied across multiple real-world CTI benchmarks and operational threat reports. The work introduces the first CTI-specific vulnerability taxonomy and empirically validates how these vulnerabilities impair model robustness. Results reveal significant performance bottlenecks of current LLMs on critical CTI tasks, including threat attribution, IOC extraction, and TTP inference. Based on these findings, the authors propose mitigation strategies centered on environment-aware modeling, knowledge-consistency constraints, and domain-adaptive fine-tuning, providing both theoretical foundations and practical guidelines for developing trustworthy, operationally viable CTI assistance systems.

📝 Abstract
Large Language Models (LLMs) are increasingly used to assist security analysts in counteracting the rapid exploitation of cyber threats, wherein LLMs offer cyber threat intelligence (CTI) to support vulnerability assessment and incident response. While recent work has shown that LLMs can support a wide range of CTI tasks such as threat analysis, vulnerability detection, and intrusion defense, significant performance gaps persist in practical deployments. In this paper, we investigate the intrinsic vulnerabilities of LLMs in CTI, focusing on challenges that arise from the nature of the threat landscape itself rather than from the model architecture. Using large-scale evaluations across multiple CTI benchmarks and real-world threat reports, we introduce a novel categorization methodology that integrates stratification, autoregressive refinement, and human-in-the-loop supervision to reliably analyze failure instances. Through extensive experiments and human inspections, we reveal three fundamental vulnerabilities that limit LLMs in effectively supporting CTI: spurious correlations, contradictory knowledge, and constrained generalization. Finally, we provide actionable insights for designing more robust LLM-powered CTI systems to facilitate future research.
Problem

Research questions and friction points this paper is trying to address.

Investigating intrinsic LLM vulnerabilities in cyber threat intelligence applications
Identifying spurious correlations and knowledge contradictions in CTI systems
Addressing constrained generalization issues in LLM-assisted security analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Stratification and autoregressive refinement for categorization
Human-in-the-loop supervision for failure analysis
Large-scale evaluation across CTI benchmarks
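The categorization methodology above can be illustrated with a minimal sketch: stratified sampling of failure instances by CTI task, followed by an iterative refinement loop in which proposed category labels are accepted or sent back by a human reviewer. All record contents, function names, and the keyword-based labeler below are illustrative assumptions, not details from the paper.

```python
import random
from collections import defaultdict

# Hypothetical failure records (task, description); examples are invented
# for illustration and do not come from the paper's data.
FAILURES = [
    ("ioc_extraction", "missed a defanged URL in the report"),
    ("ioc_extraction", "hallucinated a file hash not present in the text"),
    ("threat_attribution", "attributed an APT group from shared tooling alone"),
    ("threat_attribution", "contradictory statements about group aliases"),
    ("ttp_inference", "mapped benign admin activity to a MITRE technique"),
    ("ttp_inference", "failed to generalize to a previously unseen technique"),
]


def stratified_sample(records, per_stratum, seed=0):
    """Sample up to `per_stratum` failures from each task stratum,
    so rare tasks are represented alongside common ones."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for task, desc in records:
        strata[task].append((task, desc))
    sample = []
    for task in sorted(strata):
        pool = strata[task]
        rng.shuffle(pool)
        sample.extend(pool[:per_stratum])
    return sample


def refine_labels(sample, propose, confirm, max_rounds=3):
    """Iteratively label each failure: `propose` suggests a vulnerability
    category, and the human-in-the-loop `confirm` callback accepts it or
    rejects it; rejected items are re-proposed in the next round."""
    labels = {}
    pending = list(sample)
    for _ in range(max_rounds):
        still_pending = []
        for item in pending:
            label = propose(item)
            if confirm(item, label):
                labels[item] = label
            else:
                still_pending.append(item)
        pending = still_pending
        if not pending:
            break
    return labels


def keyword_propose(item):
    """Toy labeler mapping descriptions onto the paper's three
    vulnerability categories via keywords (an assumption, not the
    paper's actual classifier)."""
    _, desc = item
    if "contradictory" in desc:
        return "contradictory_knowledge"
    if "generalize" in desc or "unseen" in desc:
        return "constrained_generalization"
    return "spurious_correlation"
```

In practice `confirm` would surface each (failure, label) pair to an analyst; the loop structure simply guarantees that every sampled failure either receives a human-approved label or is flagged as unresolved after `max_rounds`.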