Through the Static: Demystifying Malware Visualization via Explainability

📅 2025-03-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address critical challenges in malware image classification—namely, the limited robustness and interpretability of CNNs, as well as poor experimental reproducibility—this work systematically reproduces six mainstream CNN architectures and conducts the first cross-model comparative analysis of class activation maps (CAMs), including Grad-CAM and HiRes-CAM, across three benchmark datasets: MalImg, Big2015, and VX-Zoo. The authors propose a heatmap-guided Visual Transformer enhancement paradigm, boosting F1 scores by 2–8%. Their analysis reveals systematic differences in attention localization across CNN variants, enabling expert-informed malware family identification and decision attribution. The study establishes a reproducible, interpretable, and robust methodological framework for malware visual analysis.

📝 Abstract
Security researchers grapple with the surge of malicious files, which necessitates the swift identification and classification of malware strains for effective protection. Visual classifiers, and in particular Convolutional Neural Networks (CNNs), have emerged as vital tools for this task. However, issues of robustness and explainability, common in other high-risk domains such as medicine and autonomous vehicles, remain understudied in the current literature. Although the deep learning visual classifiers presented in research obtain strong results without the need for expert feature extraction, they have not been properly studied in terms of their replicability. Additionally, the literature is not clear on how these classifiers arrive at their answers. Our study addresses these gaps by replicating six CNN models and exploring their pitfalls. We employ Class Activation Maps (CAMs), such as GradCAM and HiResCAM, to assess model explainability. We evaluate the CNNs' performance and interpretability on two standard datasets, MalImg and Big2015, and a newly created one called VX-Zoo. With these tools, we investigate the underlying factors contributing to different interpretations of inputs across the models, empowering human researchers to discern patterns crucial for identifying distinct malware families and to explain why CNN models arrive at their conclusions. Beyond highlighting the patterns found in the interpretability study, we employ the extracted heatmaps to enhance Visual Transformer classifiers' performance and explanation quality. This approach yields substantial improvements in F1 score, ranging from 2% to 8% across the datasets compared to benchmark values.
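The visual classifiers discussed here operate on malware rendered as images. The paper does not reproduce the conversion code, but the MalImg-style mapping commonly used in this line of work treats each byte of a binary as one grayscale pixel, laid out in fixed-width rows. A minimal sketch (function name and row width are illustrative, not from the paper):

```python
import numpy as np

def bytes_to_image(data: bytes, width: int = 256) -> np.ndarray:
    """MalImg-style visualization: interpret a binary as an 8-bit
    grayscale image, one byte per pixel, rows of fixed width."""
    buf = np.frombuffer(data, dtype=np.uint8)
    rows = len(buf) // width
    # Drop the ragged tail so the buffer reshapes into full rows.
    return buf[: rows * width].reshape(rows, width)
```

The resulting image exposes section-level texture (code, packed data, padding) that a CNN can learn to discriminate between malware families.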
Problem

Research questions and friction points this paper is trying to address.

Addressing robustness and explainability in malware visualization classifiers.
Replicating CNN models to study their performance and interpretability.
Enhancing Visual Transformers' performance using explainability techniques.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Replicates six CNN models for malware classification
Uses Class Activation Maps for model explainability
Enhances Visual Transformers with extracted heatmaps
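The two CAM variants compared in the paper differ only in where the spatial reduction happens: Grad-CAM averages gradients into per-channel weights before combining feature maps, while HiResCAM multiplies gradients and feature maps element-wise before summing over channels. A minimal NumPy sketch of both formulas (the arrays stand in for a trained CNN's feature maps and gradients; variable names are illustrative):

```python
import numpy as np

def grad_cam(feature_maps: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Grad-CAM: alpha_k = spatial mean of gradients for channel k,
    heatmap = ReLU(sum_k alpha_k * A^k). Inputs have shape (K, H, W)."""
    weights = gradients.mean(axis=(1, 2))            # one weight per channel
    cam = np.tensordot(weights, feature_maps, axes=1)
    return np.maximum(cam, 0)                        # keep positive evidence

def hires_cam(feature_maps: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """HiResCAM: element-wise product before the channel sum,
    skipping the gradient-averaging step that can blur localization."""
    return np.maximum((gradients * feature_maps).sum(axis=0), 0)
```

The heatmap-guided Transformer enhancement then reuses such maps as a spatial prior on the input; the exact fusion mechanism is described in the paper itself.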