🤖 AI Summary
Existing LLM-based approaches struggle to detect highly obfuscated malicious npm packages that hide their behavior behind asynchronous code, owing to context-window limitations and prohibitive computational overhead. To address this, we propose the first taint-analysis-enhanced code-slicing framework tailored to JavaScript’s event-driven nature. Our method constructs a dynamic dependency graph and introduces a heuristic asynchronous backtracking mechanism to precisely capture malicious data flows across callbacks and Promises, generating semantically aware, minimal code slices. These slices reduce average code volume by over 99%, enabling efficient ingestion by LLMs and achieving 87.04% detection accuracy with DeepSeek-Coder-6.7B, significantly outperforming baselines. Our core contribution is overcoming the semantic loss that conventional static analysis suffers under asynchrony, thereby enabling lightweight, high-fidelity, security-semantics-enhanced detection of malicious npm packages.
📝 Abstract
The increasing sophistication of malware attacks in the npm ecosystem, characterized by obfuscation and complex logic, necessitates advanced detection methods. Researchers have recently turned from traditional detection approaches to Large Language Models (LLMs) because of their strong capabilities in semantic code understanding. However, while LLMs offer superior semantic reasoning for code analysis, their practical application is constrained by limited context windows and high computational cost. This paper addresses this challenge by introducing a novel framework that leverages code slicing for LLM-based malicious package detection. We propose a specialized taint-based slicing technique for npm packages, augmented by a heuristic backtracking mechanism to accurately capture malicious data flows across asynchronous, event-driven patterns (e.g., callbacks and Promises) that elude traditional analysis. An evaluation on a dataset of more than 5,000 malicious and benign npm packages demonstrates that our approach isolates security-relevant code, reducing input volume by over 99% while preserving critical behavioral semantics. Using DeepSeek-Coder-6.7B as the classification engine, our approach achieves a detection accuracy of 87.04%, substantially outperforming a naive token-splitting baseline (75.41%) and a traditional static-analysis-based approach. These results indicate that semantically optimized input representation via code slicing not only mitigates the LLM context-window bottleneck but also significantly improves reasoning precision on security tasks, providing an efficient and effective defense against evolving malicious open-source packages.
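To give a rough sense of the idea (this is an illustrative sketch, not the paper's implementation), backward slicing with asynchronous backtracking can be modeled over a toy statement-level dependency graph. The node ids, the graph shape, the `asyncFrom` edge label, and the chosen sink are all invented for this example; the point is that a `Promise.then` callback has no ordinary syntactic dependency on the statement that produced its value, so the slicer must follow an explicit async back-edge to keep the full malicious flow while dropping unrelated code:

```javascript
// Toy statement-level dependency graph. "deps" are ordinary data
// dependencies; "asyncFrom" models a callback/Promise edge whose
// producer runs in a different turn of the event loop.
const graph = {
  s1: { code: "const data = fs.readFileSync('/etc/passwd')", deps: [] },
  s2: { code: "const enc = Buffer.from(data).toString('base64')", deps: ["s1"] },
  s3: { code: "const p = Promise.resolve(enc)", deps: ["s2"] },
  s4: { code: "p.then(v => send(v))", deps: [], asyncFrom: ["s3"] },
  s5: { code: "console.log('unrelated')", deps: [] },
};

// Backward slice from a sink statement, following both ordinary
// data dependencies and heuristic asynchronous back-edges.
function slice(graph, sink) {
  const keep = new Set();
  const stack = [sink];
  while (stack.length > 0) {
    const id = stack.pop();
    if (keep.has(id)) continue;
    keep.add(id);
    const node = graph[id];
    for (const dep of node.deps) stack.push(dep);
    // Asynchronous backtracking: jump from the callback back to the
    // statement that produced the Promise it consumes.
    for (const dep of node.asyncFrom ?? []) stack.push(dep);
  }
  return [...keep].sort();
}

// Slicing from the exfiltration sink s4 keeps s1-s4 and drops s5.
console.log(slice(graph, "s4"));
```

Without the `asyncFrom` edge, the slice from `s4` would contain only `s4` itself, losing the file read and encoding steps; this is the semantic loss under asynchrony that the heuristic backtracking is meant to repair.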