On Technique Identification and Threat-Actor Attribution using LLMs and Embedding Models

📅 2025-05-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Cyber-attack attribution is slowed by dense forensic documentation and the time-consuming manual extraction of Tactics, Techniques, and Procedures (TTPs), which delays response after major incidents. This paper proposes a lightweight end-to-end attribution framework: it uses an off-the-shelf large language model (GPT-4) to automatically extract TTPs from threat-intelligence texts; generates embeddings with text-embedding-3-large; aligns the extracted TTPs with the MITRE ATT&CK knowledge base via vector retrieval to construct behavioral profiles; and uses those profiles to train a downstream attribution model without any fine-tuning. Experiments show that although auto-extracted TTPs deviate semantically from human annotations, their frequency distribution closely matches the human-generated MITRE data. The resulting attribution model performs above baseline on threat-actor attribution, demonstrating that noisy, LLM-generated TTPs still transfer effectively to downstream training.
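The vector-retrieval step in this pipeline can be sketched as a cosine-similarity search over pre-computed ATT&CK technique embeddings. The vectors and technique names below are toy placeholders, not real text-embedding-3-large output (real vectors have 3072 dimensions):

```python
import math

# Toy stand-ins for embeddings of MITRE ATT&CK technique descriptions.
ATTACK_EMBEDDINGS = {
    "T1566 Phishing":             [0.9, 0.1, 0.0],
    "T1059 Command Scripting":    [0.1, 0.9, 0.1],
    "T1041 Exfiltration over C2": [0.0, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def align_to_attack(ttp_embedding, k=1):
    """Return the k nearest ATT&CK techniques for an extracted-TTP embedding."""
    scored = sorted(ATTACK_EMBEDDINGS.items(),
                    key=lambda kv: cosine(ttp_embedding, kv[1]),
                    reverse=True)
    return [tech for tech, _ in scored[:k]]

# An LLM-extracted phrase like "spearphishing email with macro" would embed
# close to the phishing technique:
print(align_to_attack([0.8, 0.2, 0.05]))  # ['T1566 Phishing']
```

In the real framework each extracted TTP phrase would be embedded by the OpenAI API and matched against embeddings of the full ATT&CK technique catalog; the toy search above shows only the ranking logic.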

Technology Category

Application Category

📝 Abstract
Attribution of cyber-attacks remains a complex but critical challenge for cyber defenders. Currently, manual extraction of behavioral indicators from dense forensic documentation causes significant attribution delays, especially following major incidents at the international scale. This research evaluates large language models (LLMs) for cyber-attack attribution based on behavioral indicators extracted from forensic documentation. We test OpenAI's GPT-4 and text-embedding-3-large for identifying threat actors' tactics, techniques, and procedures (TTPs) by comparing LLM-generated TTPs against human-generated data from MITRE ATT&CK Groups. Our framework identifies TTPs in text via vector embedding search and builds behavioral profiles from which a machine learning model learns to attribute new attacks. Key contributions include: (1) assessing off-the-shelf LLMs for TTP extraction and attribution, and (2) developing an end-to-end pipeline from raw CTI documents to threat-actor prediction. We find that standard LLMs generate noisy TTP datasets with low similarity to human-generated datasets, although the generated TTPs are similar in frequency to those within the existing MITRE datasets. Additionally, although these TTPs differ from human-generated datasets, our work demonstrates that they still prove useful for training a model that performs above baseline on attribution. Project code and files are available at: https://github.com/kylag/ttp_attribution.
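The profile-building and attribution steps described in the abstract can be sketched with set-valued TTP profiles and a nearest-profile match. The actor names and technique IDs below are illustrative assumptions, and the paper's actual attribution model may differ:

```python
# Behavioral profiles: ATT&CK technique IDs each actor has been observed using.
# Actor names and TTP sets here are illustrative examples only.
PROFILES = {
    "ActorA": {"T1566", "T1059", "T1041"},
    "ActorB": {"T1190", "T1505", "T1003"},
}

def jaccard(a, b):
    """Jaccard similarity between two sets of technique IDs."""
    return len(a & b) / len(a | b) if a | b else 0.0

def attribute(extracted_ttps):
    """Attribute a new incident to the actor whose TTP profile overlaps most."""
    return max(PROFILES, key=lambda actor: jaccard(PROFILES[actor], extracted_ttps))

# TTPs extracted by the LLM from a new incident report:
print(attribute({"T1566", "T1041", "T1003"}))  # ActorA (overlap 2/4 vs 1/5)
```

A trained classifier over TTP frequency vectors, as the paper uses, would replace this similarity lookup, but the input representation (per-actor TTP profiles) is the same.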
Problem

Research questions and friction points this paper is trying to address.

Automating cyber-attack attribution using LLMs for faster response
Evaluating LLMs in extracting TTPs from forensic documentation
Developing a pipeline from CTI documents to threat-actor prediction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using LLMs for cyber-attack TTP extraction
Vector embedding search for TTP identification
End-to-end pipeline for threat-actor prediction
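The LLM extraction step listed above can be sketched as a prompt plus response post-processing. The prompt wording and the JSON response shape are assumptions, not the paper's exact prompt; the GPT-4 call itself is shown only as a comment, and a canned response is parsed instead:

```python
import json
import re

# Hypothetical prompt template; the paper's actual wording is not reproduced here.
EXTRACTION_PROMPT = (
    "Extract the MITRE ATT&CK technique IDs (e.g. T1566) evidenced in the "
    "following incident report. Respond as a JSON list of IDs.\n\n{report}"
)

def parse_ttp_response(raw):
    """Validate a model response: keep only well-formed ATT&CK technique IDs."""
    ids = json.loads(raw)
    return sorted({t for t in ids if re.fullmatch(r"T\d{4}(\.\d{3})?", t)})

# In the real pipeline the filled-in prompt would be sent to GPT-4, e.g.:
#   client.chat.completions.create(model="gpt-4", messages=[...])
# Here we parse a canned response to show why post-processing matters:
canned = '["T1566", "T1566.001", "phishing", "T1059"]'
print(parse_ttp_response(canned))  # ['T1059', 'T1566', 'T1566.001']
```

Filtering on the ID pattern discards hallucinated or free-text entries before the vector-search alignment step, which is one way to contain the noise the paper reports in LLM-generated TTPs.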