Steering in the Shadows: Causal Amplification for Activation Space Attacks in Large Language Models

📅 2025-11-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work identifies high-gain regions in the residual stream of decoder-only large language models, where small, well-aligned activation perturbations undergo causal amplification along autoregressive trajectories, exposing a novel behavioral-control attack surface under both white-box and supply-chain threat models. Building on this observation, the authors formally define the Causal Amplification Effect (CAE) and propose Sensitivity-Scaled Steering (SSS), a progressive activation-intervention attack that leverages beginning-of-sequence (BOS) anchoring, attention sinks, and compression-valley analysis to localize the most vulnerable layers and tokens. Experiments across multiple open-weight models demonstrate that SSS reliably induces four classes of harmful behavioral shifts (malicious content generation, hallucination, sycophancy, and sentiment shift) while preserving output quality and semantic coherence. This is the first work to systematically establish intermediate activation-space manipulation as a practical security threat.

📝 Abstract
Modern large language models (LLMs) are typically secured by auditing data, prompts, and refusal policies, while treating the forward pass as an implementation detail. We show that intermediate activations in decoder-only LLMs form a vulnerable attack surface for behavioral control. Building on recent findings on attention sinks and compression valleys, we identify a high-gain region in the residual stream where small, well-aligned perturbations are causally amplified along the autoregressive trajectory, a phenomenon we call the Causal Amplification Effect (CAE). We exploit CAE via Sensitivity-Scaled Steering (SSS), a progressive activation-level attack that combines beginning-of-sequence (BOS) anchoring with sensitivity-based reinforcement to focus a limited perturbation budget on the most vulnerable layers and tokens. Across multiple open-weight models and four behavioral axes, SSS induces large shifts in evil, hallucination, sycophancy, and sentiment while preserving high coherence and general capabilities, turning activation steering into a concrete security concern for white-box and supply-chain LLM deployments.
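The activation-level intervention the abstract describes can be illustrated with a minimal sketch: a forward hook adds a fixed "steering vector" into the residual stream at a chosen layer, and the perturbation propagates through all later layers. This is an assumed, simplified reading of the attack, using a toy residual stack rather than a real LLM, and it is not the paper's SSS implementation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d_model = 16

class Block(nn.Module):
    """Toy residual block standing in for a transformer layer."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(d_model, d_model), nn.GELU(),
                                 nn.Linear(d_model, d_model))
    def forward(self, x):
        return x + self.mlp(x)  # residual-stream update

model = nn.Sequential(*[Block() for _ in range(6)])
steer = torch.randn(d_model)  # hypothetical behavior direction (assumed)
alpha = 0.5                   # perturbation scale

def hook(module, inputs, output):
    # Returning a tensor from a forward hook replaces the layer's output,
    # injecting the steering vector into the residual stream.
    return output + alpha * steer

x = torch.randn(1, 4, d_model)
clean = model(x)

handle = model[2].register_forward_hook(hook)  # intervene at one mid layer
steered = model(x)
handle.remove()

shift = (steered - clean).norm().item()
print(f"output shift under steering: {shift:.3f}")
```

Because every later block reads from the perturbed residual stream, even a small `alpha` can produce a measurable downstream shift, which is the intuition behind the amplification claim.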
Problem

Research questions and friction points this paper is trying to address.

Controlling model behavior by perturbing intermediate activations in LLMs
Exploiting causal amplification of residual-stream perturbations
Inducing targeted behavioral shifts while preserving model coherence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Causal amplification effect in residual stream
Sensitivity-scaled steering activation attack
BOS anchoring with sensitivity-based reinforcement
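The "sensitivity-based" part of the contribution can be sketched with a simple probe: perturb each layer of a toy residual stack with small random noise and rank layers by the downstream output shift, then spend the perturbation budget on the highest-gain layer. This is an assumed reading of the sensitivity criterion for illustration only, not the paper's exact definition.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d_model = 16

class Block(nn.Module):
    """Toy residual block standing in for a transformer layer."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(d_model, d_model), nn.GELU(),
                                 nn.Linear(d_model, d_model))
    def forward(self, x):
        return x + self.mlp(x)

model = nn.Sequential(*[Block() for _ in range(6)])
x = torch.randn(1, 4, d_model)
clean = model(x)

eps = 1e-2
gains = []
for i in range(len(model)):
    noise = eps * torch.randn(d_model)
    # Temporarily inject noise at layer i and measure the output shift.
    h = model[i].register_forward_hook(lambda m, inp, out: out + noise)
    gains.append(((model(x) - clean).norm() / eps).item())
    h.remove()

best = max(range(len(gains)), key=gains.__getitem__)
print("per-layer gain:", [round(g, 2) for g in gains], "-> steer layer", best)
```

In the real attack the budget would then be concentrated on the layers (and tokens) with the largest measured gain; here the ranking is only over layers of a random toy model.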