AI Summary
Existing Python package malware detection datasets provide only package-level labels and lack statement-level localization of malicious behavior, which hinders explainable detection and attack-pattern analysis. Method: We construct the first statement-level malicious Python package dataset (370 packages, 2,962 annotated malicious statements), propose a fine-grained malicious-behavior taxonomy covering 47 indicators, and apply a sequential pattern mining algorithm (GSP) grounded in attacker logic to systematically uncover prevalent attack chains. Annotation quality is ensured through expert manual labeling and validation. The open-source dataset comprises 833 files and 90,527 lines of code. Contribution/Results: Experiments show that our approach achieves an F1-score of 86.3% for malicious statement localization, substantially improving detector interpretability and the precision of heuristic rule generation.
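To make the mining step concrete, the sketch below implements a simplified GSP-style frequent-subsequence miner over per-package indicator sequences. The indicator names and sequences are hypothetical illustrations, not entries from the dataset; real GSP additionally supports time constraints and itemset elements, which are omitted here.

```python
from collections import Counter


def is_subsequence(pattern, sequence):
    """True if `pattern` occurs in `sequence` in order (gaps allowed)."""
    it = iter(sequence)
    return all(any(item == x for x in it) for item in pattern)


def gsp(sequences, min_support):
    """Simplified GSP: return {pattern_tuple: support} for all frequent
    subsequences appearing in at least `min_support` sequences."""
    # Level 1: frequent single indicators (count one per sequence).
    counts = Counter(item for seq in sequences for item in set(seq))
    frequent = [(item,) for item, c in counts.items() if c >= min_support]
    result = {pat: counts[pat[0]] for pat in frequent}

    while frequent:
        # Candidate generation: join k-patterns whose (k-1)-suffix/prefix match.
        candidates = {a + (b[-1],) for a in frequent for b in frequent
                      if a[1:] == b[:-1]}
        # Support counting over all input sequences.
        frequent = []
        for cand in candidates:
            support = sum(is_subsequence(cand, seq) for seq in sequences)
            if support >= min_support:
                frequent.append(cand)
                result[cand] = support
    return result


# Hypothetical indicator sequences, one per malicious package.
chains = [
    ["decode_b64", "download", "exec"],
    ["collect_env", "decode_b64", "download", "exec"],
    ["decode_b64", "exec"],
]
patterns = gsp(chains, min_support=2)
# The full chain decode_b64 -> download -> exec surfaces with support 2.
```

With such output, each frequent pattern is a candidate attack chain whose support indicates how many packages in the corpus exhibit it.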
Abstract
The widespread adoption of open-source ecosystems enables developers to integrate third-party packages, but it also exposes them to malicious packages distributed through public repositories such as PyPI. Existing datasets (e.g., pypi-malregistry, DataDog, OpenSSF, MalwareBench) label packages as malicious or benign at the package level, but do not specify which statements implement the malicious behavior. This coarse granularity limits research and practice: models cannot be trained to localize malicious code, detectors cannot justify alerts with code-level evidence, and analysts cannot systematically study recurring malicious indicators or attack chains. To address this gap, we construct a statement-level dataset of 370 malicious Python packages (833 files, 90,527 lines) with 2,962 labeled occurrences of malicious indicators. From these annotations, we derive a fine-grained taxonomy of 47 malicious indicators across 7 types that capture how adversarial behavior is implemented in code, and we apply sequential pattern mining to uncover recurring indicator sequences that characterize common attack workflows. Our contribution enables explainable, behavior-centric detection and supports both semantic-aware model training and practical heuristics for strengthening software supply-chain defenses.
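To illustrate what statement-level labeling adds over package-level labels, here is a minimal sketch of one possible annotation schema. All field and indicator names are hypothetical assumptions for illustration; the dataset's actual schema is not specified in the abstract.

```python
from dataclasses import dataclass, field


@dataclass
class StatementAnnotation:
    """One labeled occurrence of a malicious indicator (hypothetical schema)."""
    file_path: str        # file within the package archive, e.g. "setup.py"
    line_no: int          # 1-indexed location of the malicious statement
    indicator: str        # one of the fine-grained indicator labels
    indicator_type: str   # coarser category the indicator belongs to


@dataclass
class PackageRecord:
    """A malicious package plus its statement-level annotations."""
    name: str
    version: str
    annotations: list = field(default_factory=list)

    def malicious_lines(self, path):
        """Lines flagged as malicious in one file -- the code-level
        evidence a package-level label cannot provide."""
        return sorted(a.line_no for a in self.annotations
                      if a.file_path == path)


# Hypothetical example: two annotated statements in a package's setup.py.
rec = PackageRecord("evilpkg", "1.0.0")
rec.annotations.append(
    StatementAnnotation("setup.py", 3, "env_read", "info_collection"))
rec.annotations.append(
    StatementAnnotation("setup.py", 12, "base64_decode", "obfuscation"))
```

A detector trained on such records can point an analyst at exactly the flagged lines, rather than only asserting that the package as a whole is malicious.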