🤖 AI Summary
This work proposes a multimodal deep learning framework for Android malware detection that systematically addresses the often-overlooked impact of image attributes and the underutilization of APK textual information. It presents the first comprehensive evaluation of how different image types, resolutions, and CNN architectures (including ResNet-152 and EfficientNet-B4) affect detection performance. The study further leverages LLaMA-2 to extract semantic features from permissions and metadata and explores CLIP-based strategies for multimodal fusion. Experimental results show that high-resolution RGB images (e.g., 512×512) consistently yield the best performance across multiple CNN backbones, while CLIP-based fusion proves less effective. These findings underscore the importance of carefully selecting image representations and designing effective multimodal integration mechanisms to strengthen malware detection.
📝 Abstract
As zero-day Android malware attacks grow more sophisticated, recent research highlights the effectiveness of image-based representations of malware bytecode for detecting previously unseen threats. However, existing studies often overlook how image type and resolution affect detection and ignore valuable textual data in Android Application Packages (APKs), such as permissions and metadata, limiting their ability to fully capture malicious behavior. Multimodal approaches that combine image and text data have gained momentum as a promising way to address these limitations. This paper proposes a multimodal deep learning framework that integrates APK images and textual features to enhance Android malware detection. We systematically evaluate various image types and resolutions across different Convolutional Neural Network (CNN) architectures, including VGG, ResNet-152, MobileNet, DenseNet, and EfficientNet-B4, and use LLaMA-2, a large language model, to extract and annotate textual features for improved analysis. The findings demonstrate that RGB images at higher resolutions (e.g., 256×256, 512×512) achieve superior classification performance, while multimodal integration of image and text using the CLIP model shows limited potential. Overall, this research highlights the importance of systematically evaluating image attributes and integrating multimodal data to develop effective malware detection for Android systems.
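The image-based representation the abstract refers to is commonly built by reading an APK component's raw bytes and reshaping them into a pixel grid; the paper does not specify its exact conversion, so the sketch below is an illustrative assumption (the 512×512 target size mirrors the resolutions discussed, and the three-channel layout is one common way to produce RGB images):

```python
import numpy as np
from PIL import Image

def bytes_to_rgb_image(data: bytes, size: int = 512) -> Image.Image:
    """Map raw bytecode bytes onto a size x size RGB image.

    The byte stream is truncated or zero-padded to 3*size*size values,
    then reshaped so each pixel's R, G, B channels come from three
    consecutive bytes. This is a generic sketch, not the paper's method.
    """
    n = 3 * size * size
    buf = np.frombuffer(data, dtype=np.uint8)[:n]
    buf = np.pad(buf, (0, n - buf.size))  # zero-pad if the file is short
    return Image.fromarray(buf.reshape(size, size, 3), mode="RGB")

# Example: a synthetic byte stream stands in for a DEX file's contents.
img = bytes_to_rgb_image(bytes(range(256)) * 100, size=512)
```

The resulting image can then be fed to any of the CNN backbones listed above after the usual normalization for that architecture.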