🤖 AI Summary
Existing attack graph construction methods rely solely on textual data, overlooking critical threat indicators embedded in the images of cyber threat intelligence (CTI) reports, which leads to incomplete graph structures and reduced accuracy. To address this limitation, this paper introduces multimodal large language models (MLLMs) into attack graph generation for the first time. We propose an iterative visual question answering (VQA) parsing mechanism and a content-level image-text semantic alignment and fusion method, enabling fine-grained identification of image-based threat elements and cross-modal knowledge injection. Our approach integrates threat image parsing, cross-modal alignment, and graph structure enhancement. Experimental results across multiple CTI datasets show significant improvements: a 32.7% increase in key image information recognition accuracy and a 28.4% improvement in attack path coverage, effectively bridging the visual-semantic gap inherent in text-only methods.
📝 Abstract
Cyber Threat Intelligence (CTI) parsing aims to extract key threat information from massive data and transform it into actionable intelligence, enhancing threat detection and defense efficiency; it spans tasks such as attack graph construction, intelligence fusion, and indicator extraction. Among these, Attack Graph Construction (AGC) is essential for visualizing and understanding the potential attack paths of threat events described in CTI reports. Existing approaches primarily construct attack graphs from textual data alone to reveal the logical threat relationships between entities in an attack behavioral sequence. However, they typically overlook the specific threat information carried by the visual modality, which preserves key threat details in inherently multimodal CTI reports. We therefore enhance attack graph construction by analyzing visual information with Multimodal Large Language Models (MLLMs). Specifically, we propose a novel framework, MM-AttacKG, which effectively extracts key information from threat images and integrates it into attack graph construction, improving the comprehensiveness and accuracy of the resulting attack graphs. It first employs a threat image parsing module that uses MLLMs to extract critical threat information from images and generate descriptions. It then builds an iterative question-answering pipeline tailored for image parsing to refine the understanding of threat images. Finally, it performs content-level integration of attack graphs and image-based answers through MLLMs, completing threat information enhancement. Experimental results demonstrate that MM-AttacKG accurately identifies key information in threat images and significantly improves the quality of multimodal attack graph construction, effectively addressing the shortcomings of existing methods in exploiting image-based threat information.
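To make the iterative question-answering idea concrete, here is a minimal sketch of such a loop: seed questions are posed about a threat image, newly discovered entities trigger follow-up questions, and iteration stops when no new information surfaces. This is an illustrative assumption about how an iterative VQA pipeline could be structured, not the paper's actual implementation; the MLLM call is stubbed, and all names (`iterative_vqa_parse`, `stub_mllm`, the seed questions) are hypothetical.

```python
# Hypothetical sketch of an iterative VQA parsing loop for threat images.
# The real MM-AttacKG pipeline is not public here; the MLLM is replaced by
# a deterministic stub that returns canned (entity_type, value) pairs.

SEED_QUESTIONS = [
    "Which IP addresses appear in the image?",
    "Which process names are visible in the image?",
]

def stub_mllm(image, question):
    """Stand-in for an MLLM VQA call; returns canned threat entities."""
    if "IP" in question:
        return [("ip", "203.0.113.7")]
    if "process" in question:
        return [("process", "powershell.exe")]
    return []  # follow-up (refinement) questions yield nothing new here

def iterative_vqa_parse(image, ask, max_rounds=3):
    """Ask seed questions, then follow-ups about new entities, until
    a round discovers nothing new or max_rounds is reached."""
    findings = {}                      # entity_type -> set of values
    questions = list(SEED_QUESTIONS)
    for _ in range(max_rounds):
        new_entities = {}
        for q in questions:
            for ent_type, value in ask(image, q):
                if value not in findings.get(ent_type, set()):
                    new_entities.setdefault(ent_type, set()).add(value)
        if not new_entities:           # converged: no new threat information
            break
        for ent_type, values in new_entities.items():
            findings.setdefault(ent_type, set()).update(values)
        # Generate refinement questions about the newly found entities.
        questions = [
            f"What is the role of {v} in the attack?"
            for values in new_entities.values() for v in values
        ]
    return findings
```

For example, `iterative_vqa_parse("screenshot.png", stub_mllm)` returns `{"ip": {"203.0.113.7"}, "process": {"powershell.exe"}}` after two rounds: the seed round finds both entities, and the follow-up round about their roles finds nothing new, so the loop stops. The extracted entities would then feed the content-level fusion step that enriches the attack graph.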