Mind the Gap: Evaluating LLMs for High-Level Malicious Package Detection vs. Fine-Grained Indicator Identification

📅 2026-02-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically evaluates the capability of 13 large language models (LLMs) to detect malicious packages and their fine-grained malicious indicators within open-source software repositories. Using a dataset of 4,070 annotated PyPI packages, the authors conduct binary classification for package-level maliciousness detection and multi-label classification for identifying specific malicious behaviors, complemented by ablation studies on prompting strategies, temperature settings, and model scales. The results show that LLMs perform strongly at the package level—e.g., GPT-4.1 attains an F1 score of 0.99—but suffer a significant performance drop (approximately 41%) when asked to recognize fine-grained indicators. The work names this discrepancy the "granularity gap" and delineates where general-purpose versus code-specialized LLMs are each best applied in software security tasks.

📝 Abstract
The prevalence of malicious packages in open-source repositories, such as PyPI, poses a critical threat to the software supply chain. While Large Language Models (LLMs) have emerged as a promising tool for automated security tasks, their effectiveness in detecting malicious packages and indicators remains underexplored. This paper presents a systematic evaluation of 13 LLMs for detecting malicious software packages. Using a curated dataset of 4,070 packages (3,700 benign and 370 malicious), we evaluate model performance across two tasks: binary classification (package detection) and multi-label classification (identification of specific malicious indicators). We further investigate the impact of prompting strategies, temperature settings, and model specifications on detection accuracy. We find a significant "granularity gap" in LLMs' capabilities. While GPT-4.1 achieves near-perfect performance in binary detection (F1 ≈ 0.99), performance degrades by approximately 41% when the task shifts to identifying specific malicious indicators. We observe that general models are best for filtering out the majority of threats, while specialized coder models are better at detecting attacks that follow a strict, predictable code structure. Our correlation analysis indicates that parameter size and context width have negligible explanatory power regarding detection accuracy. We conclude that while LLMs are powerful detectors at the package level, they lack the semantic depth required for precise identification at the granular indicator level.
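The two evaluation settings described in the abstract—binary F1 for package-level detection and multi-label F1 for indicator identification—can be sketched in plain Python. This is an illustrative toy example with made-up predictions and hypothetical indicator labels (`exfiltration`, `obfuscation`, `dropper`), not the paper's dataset, models, or exact scoring pipeline:

```python
# Sketch of the paper's two evaluation granularities, using toy data.

def f1(tp, fp, fn):
    """F1 from true positive, false positive, and false negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def binary_f1(y_true, y_pred):
    """Package-level task: 1 = malicious, 0 = benign."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return f1(tp, fp, fn)

def micro_f1(true_sets, pred_sets):
    """Indicator-level task: each package carries a set of indicator labels."""
    tp = sum(len(t & p) for t, p in zip(true_sets, pred_sets))
    fp = sum(len(p - t) for t, p in zip(true_sets, pred_sets))
    fn = sum(len(t - p) for t, p in zip(true_sets, pred_sets))
    return f1(tp, fp, fn)

# Hypothetical predictions for four packages and two malicious packages' indicators.
pkg_f1 = binary_f1([1, 1, 0, 0], [1, 1, 0, 1])
ind_f1 = micro_f1([{"exfiltration", "obfuscation"}, {"dropper"}],
                  [{"exfiltration"}, {"dropper", "obfuscation"}])

# Relative performance drop between granularities (the "granularity gap").
gap = (pkg_f1 - ind_f1) / pkg_f1
```

On the paper's real data the gap is roughly 41%; here it is merely a demonstration of how such a drop between package-level and indicator-level F1 could be quantified.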
Problem

Research questions and friction points this paper is trying to address.

malicious package detection
fine-grained indicator identification
large language models
software supply chain security
granularity gap
Innovation

Methods, ideas, or system contributions that make the work stand out.

granularity gap
malicious package detection
large language models
fine-grained indicator identification
software supply chain security