Cascade: Composing Software-Hardware Attack Gadgets for Adversarial Threat Amplification in Compound AI Systems

📅 2026-03-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the emerging security threats posed by Compound AI systems, which integrate large language models (LLMs), software tools, and hardware infrastructure, thereby intertwining conventional software/hardware vulnerabilities with AI-specific risks. Notably, prior research has overlooked how system-level flaws can amplify AI security failures. To bridge this gap, we propose the first composable, cross-layer attack framework that systematically combines diverse attack primitives—including software exploits (e.g., code injection), hardware-level attacks (e.g., Rowhammer-induced bit flips), knowledge base manipulation, and LLM jailbreaking prompts—to construct end-to-end attack chains. Our experiments demonstrate two novel compound attacks: one bypasses safety guardrails to elicit jailbroken LLM outputs, and another induces AI agents to leak sensitive user data. These findings reveal the severe threat that low-level system vulnerabilities pose to high-level AI security, establishing a foundation for system-aware AI security research.
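The composable framework described above can be pictured as a small data model: primitives drawn from different layers, each tagged with an objective and a lifecycle stage, chained so that stages proceed in order. The sketch below is illustrative only; the class names, stage labels, and the `is_well_formed` check are my assumptions, not the paper's actual framework API.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Stage(Enum):
    """Coarse attack-lifecycle stages (hypothetical labels)."""
    INITIAL_ACCESS = auto()   # e.g., software code injection
    TAMPERING = auto()        # e.g., Rowhammer bit flip in a guardrail
    PAYLOAD = auto()          # e.g., jailbreak prompt reaches the LLM
    EXFILTRATION = auto()     # e.g., agent leaks sensitive user data

@dataclass
class AttackPrimitive:
    name: str
    layer: str       # "software", "hardware", "knowledge-base", or "llm"
    objective: str
    stage: Stage

def is_well_formed(chain: list[AttackPrimitive]) -> bool:
    """A chain composes if its stages are in non-decreasing order."""
    stages = [p.stage.value for p in chain]
    return stages == sorted(stages)

# The first compound attack, expressed as a cross-layer chain:
jailbreak_chain = [
    AttackPrimitive("code injection", "software", "inject prompt", Stage.INITIAL_ACCESS),
    AttackPrimitive("Rowhammer bit flip", "hardware", "disable guardrail", Stage.TAMPERING),
    AttackPrimitive("jailbreak prompt", "llm", "unsafe generation", Stage.PAYLOAD),
]
assert is_well_formed(jailbreak_chain)
```

Under this reading, the paper's first compound attack is a three-primitive chain that crosses the software, hardware, and LLM layers, which is what distinguishes it from single-layer jailbreaking.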

📝 Abstract
Rapid progress in generative AI has given rise to Compound AI systems: pipelines composed of multiple large language models (LLMs), software tools, and database systems. Compound AI systems are built on a layered traditional software stack running on distributed hardware infrastructure. Many of the diverse software components are vulnerable to traditional security flaws documented in the Common Vulnerabilities and Exposures (CVE) database, while the underlying distributed hardware remains exposed to timing attacks, bit-flip faults, and power-based side channels. Today's research targets LLM-specific risks such as model extraction, training data leakage, and unsafe generation, overlooking the impact of traditional system vulnerabilities. This work investigates how traditional software and hardware vulnerabilities can complement LLM-specific algorithmic attacks to compromise the integrity of a compound AI pipeline. We demonstrate two novel attacks that combine system-level vulnerabilities with algorithmic weaknesses: (1) exploiting a software code injection flaw together with a Rowhammer attack on the guardrail to deliver an unaltered jailbreak prompt to an LLM, resulting in an AI safety violation; and (2) manipulating a knowledge database to redirect an LLM agent to transmit sensitive user data to a malicious application, thus breaching confidentiality. These attacks highlight the need to address traditional vulnerabilities. We systematize the attack primitives and analyze their composition by grouping vulnerabilities by objective and mapping them to distinct stages of an attack lifecycle. This approach enables a rigorous red-teaming exercise and lays the groundwork for future defense strategies.
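To make attack (1) concrete, the toy simulation below shows why a single Rowhammer-style bit flip can neutralize a guardrail. It assumes (my assumption, not a detail from the abstract) that the guardrail compares a classifier score against a float32 threshold stored in memory; flipping one high exponent bit inflates the threshold so far that every prompt, including the injected jailbreak, passes the check.

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit of value's float32 representation (simulated fault)."""
    (raw,) = struct.unpack("<I", struct.pack("<f", value))
    (out,) = struct.unpack("<f", struct.pack("<I", raw ^ (1 << bit)))
    return out

def guardrail_blocks(toxicity_score: float, threshold: float) -> bool:
    """Toy guardrail: block the prompt when the score exceeds the threshold."""
    return toxicity_score > threshold

threshold = 0.5
jailbreak_score = 0.9
print(guardrail_blocks(jailbreak_score, threshold))   # True: prompt is blocked

corrupted = flip_bit(threshold, 30)                   # flip a high exponent bit
print(corrupted)                                      # ~1.7e38
print(guardrail_blocks(jailbreak_score, corrupted))   # False: jailbreak passes
```

The specific bit is immaterial: any corruption that weakens the comparison lets the unaltered jailbreak prompt through, which is the amplification effect of low-level faults on high-level AI safety that the paper studies.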
Problem

Research questions and friction points this paper is trying to address.

Compound AI Systems
Software-Hardware Vulnerabilities
Adversarial Attacks
LLM Security
System-Level Threats
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compound AI Systems
Hardware-Software Co-Exploitation
Adversarial Threat Amplification
Rowhammer Attack
LLM Jailbreaking