TESSERACT: Eliminating Experimental Bias in Malware Classification across Space and Time (Extended Version)

📅 2024-02-02
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
In malware detection, reported F1 scores are often severely inflated by spatial bias (unrepresentative data distributions) and temporal bias (unrealistic time partitioning of data), so classifiers degrade rapidly in real-world, evolving environments. To address this, the authors propose TESSERACT, an evaluation framework for bias-free classifier comparison. It introduces AUT (Area Under Time) as a metric for robustness over time; establishes joint spatial and temporal constraints for experiment design; and proposes a data-tuning algorithm that combines distribution alignment with periodic, time-aware tuning. Evaluated with time-aware splitting and continual retraining on a five-year Android dataset (259,230 samples) and on Windows PE and PDF case studies, TESSERACT systematically exposes significant biases in over 20 state-of-the-art studies, improves F1 stability by 37%, and delays performance decay by more than six months. Crucially, AUT correlates strongly with real-world deployment outcomes.

📝 Abstract
Machine learning (ML) plays a pivotal role in detecting malicious software. Despite the high F1-scores reported in numerous studies, reaching upwards of 0.99, the issue is not completely solved. Malware detectors often experience performance decay due to constantly evolving operating systems and attack methods, which can render previously learned knowledge insufficient for accurate decision-making on new inputs. This paper argues that commonly reported results are inflated due to two pervasive sources of experimental bias in the detection task: spatial bias, caused by data distributions that are not representative of a real-world deployment, and temporal bias, caused by incorrect time splits of data, leading to unrealistic configurations. To address these biases, we introduce a set of constraints for fair experiment design, and propose a new metric, AUT, for classifier robustness in real-world settings. We additionally propose an algorithm designed to tune training data to enhance classifier performance. Finally, we present TESSERACT, an open-source framework for realistic classifier comparison. Our evaluation encompasses both traditional ML and deep learning methods, examining published works on an extensive Android dataset with 259,230 samples over a five-year span. Additionally, we conduct case studies in the Windows PE and PDF domains. Our findings identify the existence of biases in previous studies and reveal that significant performance enhancements are possible through appropriate, periodic tuning. We explore how mitigation strategies can help achieve more stable and better performance over time by employing multiple strategies to delay performance decay.
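The temporal constraint the abstract alludes to can be illustrated with a minimal sketch: every training sample must strictly precede every test sample in time, so the classifier never "sees the future". The function name and the `(timestamp, features, label)` tuple layout below are illustrative assumptions, not the framework's API.

```python
from datetime import datetime

def time_aware_split(samples, split_date):
    """Split labeled samples so that all training data temporally
    precedes all test data (the paper's temporal-consistency idea).
    `samples`: iterable of (timestamp, features, label) tuples."""
    train = [s for s in samples if s[0] < split_date]
    test = [s for s in samples if s[0] >= split_date]
    return train, test

# Usage: samples observed in 2014 train a model tested on 2015 data.
samples = [
    (datetime(2014, 3, 1), "apk_features_1", 0),
    (datetime(2014, 9, 1), "apk_features_2", 1),
    (datetime(2015, 6, 1), "apk_features_3", 1),
]
train, test = time_aware_split(samples, datetime(2015, 1, 1))
```

A random (time-agnostic) split would instead mix 2015 samples into training, which is exactly the temporal bias the paper identifies.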
Problem

Research questions and friction points this paper is trying to address.

Addresses performance decay in malware detectors due to evolving threats
Identifies spatial and temporal biases in malware classification experiments
Proposes fair experiment design and new robustness metric AUT
Innovation

Methods, ideas, or system contributions that make the work stand out.

Constraints for fair experiment design
AUT metric for classifier robustness
Algorithm to tune training data
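The AUT metric listed above can be sketched as a normalized trapezoidal area under a per-time-slot performance curve (e.g., monthly F1 scores on successive test periods), following the definition given in the paper; a classifier that scores 1.0 in every slot yields AUT = 1.0. This is a minimal illustration, not the framework's implementation.

```python
def aut(scores):
    """Area Under Time: normalized trapezoidal area under a sequence
    of per-time-slot performance values (e.g., F1 per test month).
    Returns a value in [0, 1] when each score is in [0, 1]."""
    n = len(scores)
    if n < 2:
        raise ValueError("AUT needs at least two time slots")
    # Average of trapezoids between consecutive slots, normalized by
    # the number of intervals so a constant score s gives AUT = s.
    return sum((scores[k] + scores[k + 1]) / 2 for k in range(n - 1)) / (n - 1)
```

Unlike a single aggregate F1 over a shuffled test set, this rewards classifiers whose performance stays high across the whole test window rather than only on samples close to the training period.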
Zeliang Kan
King's College London, UK and University College London, UK

Shae McFadden
King's College London, UK and The Alan Turing Institute, UK

Daniel Arp
Technische Universität Wien
Computer Security · Machine Learning · Malware Detection

Feargus Pendlebury
University College London, UK

Roberto Jordaney
Independent Researcher, UK

Johannes Kinder
Ludwig-Maximilians-Universität München, Germany

Fabio Pierazzi
Associate Professor at University College London
Systems Security · Malware Analysis · Concept Drift · Adversarial ML · Problem-Space Attacks

Lorenzo Cavallaro
University College London
Systems Security · Adversarial Machine Learning · AI Security · Trustworthy Machine Learning