Semantic-Aware Contrastive Fine-Tuning: Boosting Multimodal Malware Classification with Discriminative Embeddings

📅 2025-04-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the semantic misalignment and embedding confusion between LLM-generated textual descriptions and binary behavioral semantics in fine-grained malware family classification, this paper proposes a semantic-aware contrastive fine-tuning framework. Methodologically, it integrates contrastive learning, Model-Agnostic Meta-Learning (MAML), hard negative mining, and few-shot classification. Its key contributions are: (1) a novel two-level hard negative sampling strategy—selecting high- and medium-similarity negatives based on cosine similarity—to jointly enhance discriminability and embedding diversity; and (2) cross-modal alignment between textual descriptions and binary features. Evaluated on CIC-AndMal-2020, the method achieves 63.15% accuracy with only 20 samples per class, outperforming baselines by 11–21 percentage points. Moreover, attribute-aware textual descriptions generalize effectively to unseen malware variants; ablation studies confirm that the proposed sampling strategy improves performance by 10–23% over random sampling.
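The two-level hard negative sampling described above can be sketched as follows. This is a minimal illustration in plain NumPy, not the paper's implementation: the function name, similarity thresholds, and sample counts are hypothetical, but the core idea (rank cross-family candidates by cosine similarity to the anchor, then take the top tier for discriminability and a middle tier for diversity) matches the summary.

```python
import numpy as np

def two_level_hard_negatives(anchor, candidates, n_hard=2, n_mid=2):
    """Select hard (highest-similarity) and mid-tier negatives for an anchor.

    anchor:     (d,) embedding of the query sample
    candidates: (N, d) embeddings of samples from *other* malware families
    Returns indices of the n_hard most-similar negatives (sharpen the
    decision boundary) plus n_mid negatives from the middle of the
    similarity ranking (keep the embedding space diverse).
    """
    a = anchor / np.linalg.norm(anchor)
    c = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    sims = c @ a                          # cosine similarity to the anchor
    order = np.argsort(sims)[::-1]        # most similar first
    hard = order[:n_hard]                 # top tier: hardest negatives
    mid_start = len(order) // 2 - n_mid // 2
    mid = order[mid_start:mid_start + n_mid]  # middle tier
    return np.concatenate([hard, mid])
```

In a real pipeline the candidates would be LLM-generated description embeddings of other families, and the selected indices would feed the contrastive loss during fine-tuning.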

📝 Abstract
The rapid evolution of malware variants requires robust classification methods to enhance cybersecurity. While Large Language Models (LLMs) offer potential for generating malware descriptions to aid family classification, their utility is limited by semantic embedding overlaps and misalignment with binary behavioral features. We propose a contrastive fine-tuning (CFT) method that refines LLM embeddings via targeted selection of hard negative samples based on cosine similarity, enabling LLMs to distinguish between closely related malware families. Our approach combines high-similarity negatives to enhance discriminative power and mid-tier negatives to increase embedding diversity, optimizing both precision and generalization. Evaluated on the CIC-AndMal-2020 and BODMAS datasets, our refined embeddings are integrated into a multimodal classifier within a Model-Agnostic Meta-Learning (MAML) framework in a few-shot setting. Experiments demonstrate significant improvements: our method achieves 63.15% classification accuracy with as few as 20 samples on CIC-AndMal-2020, outperforming baselines by 11–21 percentage points and surpassing prior negative sampling strategies. Ablation studies confirm the superiority of similarity-based selection over random sampling, with gains of 10–23%. Additionally, fine-tuned LLMs generate attribute-aware descriptions that generalize to unseen variants, bridging textual and binary feature gaps. This work advances malware classification by enabling nuanced semantic distinctions and provides a scalable framework for adapting LLMs to cybersecurity challenges.
Problem

Research questions and friction points this paper is trying to address.

Enhancing malware classification via semantic-aware contrastive fine-tuning
Addressing embedding overlaps in LLMs for malware family distinction
Improving few-shot learning accuracy in multimodal malware detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Contrastive fine-tuning refines LLM embeddings discriminatively
Combines high and mid-similarity negatives for diversity
Integrates embeddings into MAML framework for few-shot learning
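The contrastive objective that consumes the mined negatives can be illustrated with a toy InfoNCE-style loss. This is a hedged sketch, not the paper's exact formulation: the function name and temperature value are assumptions, but it shows the mechanism by which harder negatives produce a stronger training signal.

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.07):
    """InfoNCE-style contrastive loss for a single anchor.

    anchor, positive: (d,) embeddings of two samples from the same family
    negatives:        (K, d) mined negatives from other families
    Lower loss means the anchor sits closer to its positive than to the
    negatives in cosine-similarity space.
    """
    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

    pos = np.exp(cos(anchor, positive) / temperature)
    negs = np.exp(np.array([cos(anchor, n) for n in negatives]) / temperature)
    # Cross-entropy of picking the positive among {positive} ∪ negatives.
    return -np.log(pos / (pos + negs.sum()))
```

Because similar (hard) negatives contribute larger terms to the denominator, swapping a random negative for a high-similarity one raises the loss, which is exactly why similarity-based selection yields stronger gradients than random sampling.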