DRMD: Deep Reinforcement Learning for Malware Detection under Concept Drift

📅 2025-08-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address performance degradation under concept drift, high annotation costs, and large prediction uncertainty in Android malware detection, this paper proposes a deep reinforcement learning (DRL)-based joint detection-and-rejection framework. It formulates malware detection as a single-step Markov decision process for the first time, jointly optimizing classification outputs and the active rejection of high-risk samples. The framework integrates online learning, time-aware evaluation, and multi-stage active training, enabling fully autonomous adaptation to distributional shifts without human intervention. Evaluated on realistic, drifting datasets, the approach significantly improves long-term stability: experiments demonstrate an average improvement of 5.18 to 14.49 points in Area Under Time (AUT), a temporal performance metric, over state-of-the-art methods, validating its robustness and practicality under limited annotation budgets.
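AUT summarizes a per-period metric (e.g., monthly F1) as a trapezoidal average over the evaluation window, so a detector that degrades under drift scores lower than one that stays stable. A minimal sketch of this computation, assuming the common trapezoidal definition of AUT and using hypothetical monthly F1 scores:

```python
def area_under_time(scores):
    """Trapezoidal average of a per-period metric over N time slots.

    Stays in [0, 1] when the underlying metric does; a detector whose
    performance decays over time is penalized relative to a stable one.
    """
    n = len(scores)
    if n < 2:
        raise ValueError("need at least two time slots")
    return sum((scores[k] + scores[k + 1]) / 2 for k in range(n - 1)) / (n - 1)

# Hypothetical monthly F1 of a detector degrading under concept drift
monthly_f1 = [0.95, 0.90, 0.70, 0.60]
print(round(area_under_time(monthly_f1), 4))  # → 0.7917
```

A stable detector with constant F1 of 0.95 over the same window would score 0.95, so the gap between the two AUT values quantifies the cost of drift.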

📝 Abstract
Malware detection in real-world settings must deal with evolving threats, limited labeling budgets, and uncertain predictions. Traditional classifiers, without additional mechanisms, struggle to maintain performance under concept drift in malware domains, as their supervised learning formulation cannot optimize when to defer decisions to manual labeling and adaptation. Modern malware detection pipelines combine classifiers with monthly active learning (AL) and rejection mechanisms to mitigate the impact of concept drift. In this work, we develop a novel formulation of malware detection as a one-step Markov Decision Process and train a deep reinforcement learning (DRL) agent, simultaneously optimizing sample classification performance and rejecting high-risk samples for manual labeling. We evaluated the joint detection and drift mitigation policy learned by the DRL-based Malware Detection (DRMD) agent through time-aware evaluations on Android malware datasets subject to realistic drift requiring multi-year performance stability. The policies learned under these conditions achieve a higher Area Under Time (AUT) performance compared to standard classification approaches used in the domain, showing improved resilience to concept drift. Specifically, the DRMD agent achieved a $5.18 \pm 5.44$, $14.49 \pm 12.86$, and $10.06 \pm 10.81$ average AUT performance improvement for the classification only, classification with rejection, and classification with rejection and AL settings, respectively. Our results demonstrate for the first time that DRL can facilitate effective malware detection and improved resiliency to concept drift in the dynamic environment of the Android malware domain.
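In the one-step MDP framing, each sample is its own episode: the agent observes a feature vector, takes exactly one action (classify as benign, classify as malicious, or reject for manual labeling), receives a reward, and the episode ends. A minimal sketch of this interaction loop, where the action set follows the abstract but the reward values, the `reject_cost` parameter, and the stand-in policy are hypothetical illustrations rather than the paper's actual design:

```python
ACTIONS = ("benign", "malicious", "reject")

def reward(action, true_label, reject_cost=-0.2):
    """Hypothetical reward shaping for the one-step episode: correct
    classification is rewarded, errors are penalized, and rejection
    pays a fixed annotation cost regardless of the true label."""
    if action == "reject":
        return reject_cost  # sample is deferred to a human analyst
    return 1.0 if action == true_label else -1.0

def episode(policy, features, true_label):
    """One-step episode: observe features, act once, terminate."""
    action = policy(features)
    return action, reward(action, true_label)

# Stand-in policy for illustration: defer when no feature is confident
policy = lambda x: "reject" if max(x) < 0.5 else "malicious"
print(episode(policy, [0.9, 0.1], "malicious"))  # → ('malicious', 1.0)
print(episode(policy, [0.3, 0.2], "benign"))     # → ('reject', -0.2)
```

Because the reject cost is part of the reward, the agent learns a single policy that trades off classification accuracy against the labeling budget, rather than bolting a rejection threshold onto a separately trained classifier.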
Problem

Research questions and friction points this paper is trying to address.

Detecting malware under evolving threats with limited labeling
Optimizing classification and rejection for manual labeling decisions
Improving resilience to concept drift in Android malware detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deep reinforcement learning for malware detection
Simultaneous optimization of classification and rejection
Improved resilience to concept drift