🤖 AI Summary
This study reveals that on-device large language models (LLMs) can be weaponized by adversaries within “Living-off-the-Land” (LotL) attack chains, serving as novel semantic-level attack vectors. Current defenses focus predominantly on monitoring suspicious tools while neglecting the semantic behaviors of LLMs—leaving a critical blind spot.
Method: We demonstrate empirically that LLMs can autonomously generate malicious commands, author benign-looking scripts capable of evading detection, and orchestrate end-to-end penetration tasks, all without relying on external malicious binaries. Our approach integrates adversarial machine learning, attack-chain modeling, and threat-intelligence analysis into an end-to-end attack-simulation framework, and introduces an intent-aware, semantic-behavior detection paradigm.
Results: We show that LLMs significantly expand the scope and stealth of LotL attacks, enabling highly adaptable and persistent intrusions. This necessitates a paradigm shift in security defense, from tool-centric monitoring to semantic-behavioral analysis, thereby redefining the boundaries of endpoint threat detection.
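To make the proposed shift from tool-centric to semantic-behavioral detection concrete, here is a minimal, hypothetical sketch of intent-aware command screening: rather than allow/deny-listing specific binaries, it scores a command line by behavioral indicators (download-and-execute chaining, encoded payloads, persistence) regardless of which legitimate tool performs them. The indicator patterns and function names are illustrative assumptions, not the paper's actual detector.

```python
import re

# Illustrative behavioral-intent patterns; a real detector would use far
# richer semantic features (process lineage, model-output provenance, etc.).
INTENT_PATTERNS = {
    # A legitimate downloader piped into a shell interpreter.
    "download_exec": re.compile(
        r"(curl|wget|certutil).*(\||;).*(sh|bash|powershell)", re.I
    ),
    # A long base64-like argument passed via an encoded-command flag.
    "encoded_payload": re.compile(
        r"-enc(odedcommand)?\s+[A-Za-z0-9+/=]{20,}", re.I
    ),
    # Scheduled-task or cron manipulation, a common persistence behavior.
    "persistence": re.compile(r"(schtasks\s+/create|crontab\s+-)", re.I),
}

def score_command(cmdline: str) -> list[str]:
    """Return the behavioral intents matched by a command line."""
    return [name for name, pat in INTENT_PATTERNS.items() if pat.search(cmdline)]
```

The point of the sketch is that `curl` or `schtasks` alone is benign; it is the combination of behaviors in context that signals LotL activity, which is what a semantic-level detector must capture.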
📝 Abstract
In living-off-the-land (LotL) attacks, malicious actors use legitimate tools and processes already present on a system to avoid detection. In this paper, we explore how the on-device LLMs of the future will become a security concern as threat actors integrate them into their LotL attack pipelines, and we discuss ways the security community may mitigate this threat.