Beyond Single Reports: Evaluating Automated ATT&CK Technique Extraction in Multi-Report Campaign Settings

📅 2026-04-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses a limitation of existing automated approaches: they typically extract MITRE ATT&CK techniques from individual cyber threat intelligence (CTI) reports and thus fail to characterize the full spectrum of tactics and techniques employed in large-scale attack campaigns, leaving gaps in defensive coverage. For the first time, this work systematically evaluates the impact of aggregating multiple CTI reports on ATT&CK technique extraction performance. Integrating 90 CTI reports and reproducing 29 state-of-the-art methods—including named entity recognition, encoder-based classifiers, and large language models—the study benchmarks their effectiveness across three major incidents: SolarWinds, XZ Utils, and Log4j. Results demonstrate that multi-report aggregation improves F1 scores by 26% on average (reaching 78.6% for SolarWinds and 54.9% for XZ Utils), with performance saturating after 5–15 reports; however, misclassification among semantically similar techniques continues to significantly undermine defensive control coverage.
📝 Abstract
Large-scale cyberattacks, referred to as campaigns, are documented across multiple CTI reports from diverse sources, with some providing a high-level overview of attack techniques and others providing technical details. Extracting attack techniques from reports is essential for organizations to identify the controls required to protect against attacks. Manually extracting techniques at scale is impractical. Existing automated methods focus on single reports, leaving many attack techniques and their controls undetected and producing a fragmented view of campaign behavior. The goal of this study is to aid security researchers in extracting attack techniques and controls from a campaign by replicating and comparing the performance of state-of-the-art ATT&CK technique extraction methods in a multi-report campaign setting against prior single-report evaluations. We conduct an empirical study of 29 methods for extracting attack techniques, spanning named entity recognition (NER), encoder-based classification, and decoder-based LLM approaches. Our study analyzes 90 CTI reports across three major attack campaigns—SolarWinds, XZ Utils, and Log4j—using both quantitative performance metrics and the resulting impact on security controls. Our results show that aggregating multiple CTI reports improves the F1 score by about 26% over single-report analysis, with most approaches reaching performance saturation after 5–15 reports. Despite these gains, extraction performance remains limited, with maximum F1 scores of 78.6% for SolarWinds and 54.9% for XZ Utils. Moreover, up to 33.3% of misclassifications involve semantically similar techniques that share tactics and overlap in descriptions, and these misclassifications have a disproportionate effect on control coverage. Longer reports that include technical details consistently perform better, even though their readability scores are low.
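The paper does not publish its evaluation code, but the campaign-level setup it describes—pooling per-report technique predictions and scoring the pooled set against campaign ground truth—can be sketched as a simple set-union aggregation with set-based precision, recall, and F1. The technique IDs and report sets below are hypothetical illustrations, not data from the study.

```python
# Minimal sketch (hypothetical data): aggregate per-report ATT&CK technique
# predictions into one campaign-level set, then score against ground truth.

def aggregate(per_report_preds):
    """Union the technique IDs predicted across all reports on a campaign."""
    campaign = set()
    for preds in per_report_preds:
        campaign |= set(preds)
    return campaign

def prf1(predicted, truth):
    """Set-based precision, recall, and F1 over technique IDs."""
    tp = len(predicted & truth)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(truth) if truth else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical example: three reports covering the same campaign.
reports = [
    {"T1566", "T1059"},  # report 1: phishing, command/script interpreter
    {"T1059", "T1078"},  # report 2: scripting, valid accounts
    {"T1078", "T1071"},  # report 3: valid accounts, application-layer C2
]
truth = {"T1566", "T1059", "T1078", "T1071", "T1547"}

p, r, f1 = prf1(aggregate(reports), truth)
```

Each added report can only grow the predicted set, so recall is monotonically non-decreasing under this aggregation—consistent with the saturation behavior the paper reports once new documents stop contributing unseen techniques.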
Problem

Research questions and friction points this paper is trying to address.

ATT&CK technique extraction
multi-report campaign
cyber threat intelligence
attack behavior coverage
security controls
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-report analysis
ATT&CK technique extraction
cyber threat intelligence (CTI)
empirical evaluation
campaign-level assessment