🤖 AI Summary
This study addresses two converging threats: AI-powered malware that leverages cloud-hosted large language models (LLMs) for reconnaissance and code generation, and the leakage of sensitive enterprise data through document uploads. Recognizing that existing defenses offer little visibility once data enters an LLM service, the work proposes a passive, format-agnostic detection framework based on steganographic canary files. Cryptographically derived identifiers are embedded into documents through a hybrid of symbolic steganography (whitespace substitution, zero-width character insertion, homoglyph substitution) and linguistic steganography (arithmetic coding over GPT-2), then extracted during LLM preprocessing, enabling source attribution and attack interruption without semantic analysis. A four-tier transport-transform threat taxonomy evaluates robustness against AI-malware processing pipelines. Experiments demonstrate 100% identifier recovery under benign and sanitization conditions, with the hybrid Mode B maintaining 97% recovery under targeted adversarial transformations and successfully disrupting an end-to-end LLM-based ransomware attack chain.
📝 Abstract
AI-powered malware increasingly exploits cloud-hosted generative-AI services and large language models (LLMs) as analysis engines for reconnaissance and code generation. Simultaneously, enterprise uploads expose sensitive documents to third-party AI vendors. Both threats converge at the AI-service ingestion boundary, yet existing defenses focus on endpoints and network perimeters, leaving organizations with limited visibility once plaintext reaches an LLM service. To address this, we present a framework based on steganographic canary files: realistic documents carrying cryptographically derived identifiers embedded via complementary encoding channels. A pre-ingestion filter extracts and verifies these identifiers before LLM processing, enabling passive, format-agnostic detection without semantic classification. We support two modes of operation: Mode A marks existing sensitive documents with layered symbolic encodings (whitespace substitution, zero-width character insertion, homoglyph substitution), while Mode B generates synthetic canary documents using linguistic steganography (arithmetic coding over GPT-2), augmented with compatible symbolic layers. We model increasing document pre-processing and adversarial capability for both modes via a four-tier transport-transform taxonomy. All methods achieve 100% identifier recovery under benign and sanitization workflows (Tiers 1-2), and the hybrid Mode B maintains 97% recovery under targeted adversarial transforms (Tier 3). An end-to-end case study against an LLM-orchestrated ransomware pipeline confirms that both modes detect and block canary-bearing uploads before file encryption begins. To our knowledge, this is the first framework to systematically combine symbolic and linguistic text steganography into layered canary documents for detecting unauthorized LLM processing, evaluated against a transport-threat taxonomy tailored to AI malware.
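To make the symbolic channel concrete, the sketch below illustrates one of the Mode A encodings named above: embedding an identifier as zero-width Unicode characters inside cover text. This is a minimal illustration, not the paper's implementation; the function names, the two-character bit alphabet, and the single insertion point are assumptions, and the actual framework layers several such channels and derives identifiers cryptographically.

```python
# Minimal sketch of a zero-width-character canary channel (hypothetical
# helpers; the paper's real encoding and identifier derivation differ).

ZERO = "\u200b"  # ZERO WIDTH SPACE       -> bit 0 (assumed mapping)
ONE = "\u200c"   # ZERO WIDTH NON-JOINER  -> bit 1 (assumed mapping)

def embed(cover: str, identifier: bytes) -> str:
    """Hide the identifier, bit by bit, as invisible characters
    placed after the first word of the cover text."""
    bits = "".join(f"{byte:08b}" for byte in identifier)
    payload = "".join(ONE if b == "1" else ZERO for b in bits)
    head, sep, tail = cover.partition(" ")
    return head + payload + sep + tail

def extract(marked: str) -> bytes:
    """Recover the identifier by collecting zero-width characters
    in order and reassembling their bits into bytes."""
    bits = "".join("1" if ch == ONE else "0"
                   for ch in marked if ch in (ZERO, ONE))
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

doc = embed("Quarterly revenue report for internal use only.", b"\xde\xad")
# The visible text is unchanged once the invisible characters are stripped:
assert doc.replace(ZERO, "").replace(ONE, "") == \
    "Quarterly revenue report for internal use only."
# A pre-ingestion filter can recover the identifier losslessly:
assert extract(doc) == b"\xde\xad"
```

A pre-ingestion filter in this spirit would run `extract` over each upload and compare the recovered bytes against known canary identifiers; the Tier 3 transforms evaluated in the paper (e.g., Unicode normalization) are precisely the operations that can strip such a single-channel payload, which motivates the layered hybrid encoding.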