Improving Generalization on Cybersecurity Tasks with Multi-Modal Contrastive Learning

📅 2026-03-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited generalization of existing machine learning models on cybersecurity tasks, which often rely on superficial features (i.e., shortcuts) in the data. To mitigate this, we propose the first two-stage multi-modal contrastive learning framework tailored for cybersecurity, leveraging textual modalities—such as vulnerability descriptions—to guide threat classification in data-scarce payload modalities. By aligning their semantic embedding spaces, our approach enables effective cross-modal knowledge transfer. We construct a synthetic benchmark dataset using CVE entries and payloads generated by large language models, and validate our method on both a private large-scale dataset and public benchmarks. Experimental results demonstrate that our framework significantly alleviates shortcut learning and achieves superior generalization over existing baselines. We publicly release the code and datasets to support further research.

📝 Abstract
The use of ML in cybersecurity has long been impaired by generalization issues: models that work well in controlled scenarios fail to maintain performance in production. The root cause often lies in ML algorithms learning superficial patterns (shortcuts) rather than underlying cybersecurity concepts. We investigate contrastive multi-modal learning as a first step towards improving ML performance on cybersecurity tasks. We aim to transfer knowledge from data-rich modalities, such as text, to data-scarce modalities, such as payloads. We set up a case study on threat classification and propose a two-stage multi-modal contrastive learning framework that uses textual vulnerability descriptions to guide payload classification. First, we construct a semantically meaningful embedding space using contrastive learning on descriptions. Then, we align payloads to this space, transferring knowledge from text to payloads. We evaluate the approach on a large-scale private dataset and on a synthetic benchmark built from public CVE descriptions and LLM-generated payloads. The methodology appears to reduce shortcut learning over baselines on both benchmarks. We release our synthetic benchmark and source code as open source.
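The cross-modal alignment step described above is typically trained with an InfoNCE-style contrastive objective: matched (description, payload) embedding pairs are pulled together while other pairs in the batch act as negatives. The paper does not give its exact loss, so the following is a minimal pure-Python sketch of a standard InfoNCE loss under that assumption; the function names and the temperature value are illustrative, not taken from the paper.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE loss over a batch of embedding pairs.

    anchors[i] (e.g. a payload embedding) is matched with positives[i]
    (e.g. the embedding of its vulnerability description); every other
    row of `positives` serves as an in-batch negative.
    """
    loss = 0.0
    for i, a in enumerate(anchors):
        logits = [cosine(a, p) / temperature for p in positives]
        log_denom = math.log(sum(math.exp(l) for l in logits))
        loss += -(logits[i] - log_denom)  # cross-entropy with target index i
    return loss / len(anchors)
```

Minimizing this loss drives each payload embedding toward its paired description embedding; in stage two the text encoder would be frozen, so the gradient only reshapes the payload encoder toward the text-derived space.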
Problem

Research questions and friction points this paper is trying to address.

generalization
cybersecurity
shortcut learning
multi-modal learning
machine learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-modal contrastive learning
shortcut learning
knowledge transfer
cybersecurity generalization
payload classification