🤖 AI Summary
Binary malware summarization suffers from poor pseudocode readability, scarce training data, and unmodeled inter-procedural call relationships, leading to low usability, inaccurate explanations, and incomplete coverage. To address these challenges, this paper proposes MALSIGHT, an iterative code summarization framework. Its key contributions are: (1) the first high-quality, behavior-oriented malware summarization datasets, MalS and MalP; (2) an iterative, function-level summarization mechanism that jointly leverages malicious source code and benign pseudocode while explicitly modeling cross-function call dependencies; and (3) a dedicated evaluation metric, BLEURT-sum, and a lightweight model, MalT5 (0.77B parameters). Experiments on three benchmarks show the effectiveness of the framework: MalT5 matches the performance of the much larger Code-Llama while substantially improving the usability, accuracy, and completeness of summaries.
📝 Abstract
Binary malware summarization aims to automatically generate human-readable descriptions of malware behaviors from executable files, facilitating tasks like malware cracking and detection. Previous methods based on Large Language Models (LLMs) have shown great promise. However, they still face significant issues, including poor usability, inaccurate explanations, and incomplete summaries, primarily due to the obscure structure of pseudocode and the lack of malware training summaries. Further, calling relationships between functions, which encode the rich interactions within a binary malware, remain largely underexplored. To this end, we propose MALSIGHT, a novel code summarization framework that can iteratively generate descriptions of binary malware by exploring malicious source code and benign pseudocode. Specifically, we construct the first malware summary datasets, MalS and MalP, using an LLM and manually refine them with human effort. At the training stage, we tune our proposed MalT5, a novel LLM-based code model, on the MalS and benign pseudocode datasets. Then, at the test stage, we iteratively feed the pseudocode functions into MalT5 to obtain the summary. Such a procedure facilitates the understanding of pseudocode structure and captures the intricate interactions between functions, thereby benefiting the usability, accuracy, and completeness of summaries. Additionally, we propose a novel evaluation benchmark, BLEURT-sum, to measure the quality of summaries. Experiments on three datasets show the effectiveness of the proposed MALSIGHT. Notably, our proposed MalT5, with only 0.77B parameters, delivers performance comparable to the much larger Code-Llama.
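To make the iterative, call-aware test-stage procedure concrete, here is a minimal sketch of one plausible realization: functions are summarized in call-graph order (callees first), and each callee's summary is injected into the prompt for its caller before the model runs. All names here (`summarize_binary`, the `model` callable, the dictionary-based call graph) are illustrative assumptions for exposition, not the paper's actual API or prompting scheme.

```python
from graphlib import TopologicalSorter

def summarize_binary(pseudocode, call_graph, model):
    """Summarize each pseudocode function after its callees, so that
    callee summaries can be fed into the caller's prompt.

    pseudocode: dict mapping function name -> decompiled pseudocode text
    call_graph: dict mapping function name -> list of callee names
    model:      callable (name, prompt) -> summary string (e.g. MalT5)
    """
    # TopologicalSorter treats the mapped values as dependencies,
    # so callees are emitted before their callers.
    order = TopologicalSorter(call_graph).static_order()
    summaries = {}
    for fn in order:
        # Prepend already-computed callee summaries to model the
        # cross-function call relationships described in the abstract.
        callee_notes = [f"callee {c}: {summaries[c]}"
                        for c in call_graph.get(fn, [])]
        prompt = "\n".join(callee_notes + [pseudocode[fn]])
        summaries[fn] = model(fn, prompt)
    return summaries
```

A usage example with a stub model: for a binary where `main` calls `decrypt`, the loop summarizes `decrypt` first, then passes that summary along when summarizing `main`. Real malware call graphs can contain cycles (recursion), which a production pipeline would have to break before ordering; this sketch assumes an acyclic graph.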