Embedding Poisoning: Bypassing Safety Alignment via Embedding Semantic Shift

📅 2025-09-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work identifies a novel deployment-stage attack exploiting the insufficiently deep validation of embedding layers on public platforms (e.g., Hugging Face). Without modifying model weights or input text, the attack injects statistically benign but semantically shifted perturbations directly into embedding-layer outputs. It employs a search-driven optimization to precisely identify the critical perturbation interval triggering transitions from “refusal” to “harmful” responses, and leverages a model-agnostic embedding injection technique to systematically bypass alignment mechanisms. Evaluated on six mainstream aligned LLMs, the attack achieves an average success rate of 96.43% while evading existing detection methods. Crucially, this study is the first to demonstrate that semantic drift in high-risk token embeddings can be actively exploited, highlighting the critical importance of embedding-layer integrity verification in LLM deployment security. The findings provide concrete design principles for next-generation defense frameworks targeting embedding-layer vulnerabilities.
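The injection step described above, perturbing embedding-layer *outputs* while leaving the stored weights untouched, can be sketched as a toy, library-free example. The embedding table, the high-risk token set, and the shift direction are all illustrative values invented for this sketch, not taken from the paper:

```python
# Toy sketch of embedding-output poisoning: the weight table is never
# modified; a shift is added only to the outputs for high-risk token ids.
# All concrete values (table, ids, direction) are illustrative.
EMB = {0: [0.1, 0.2], 1: [0.5, -0.3], 2: [-0.4, 0.9]}  # toy embedding table
HIGH_RISK = {2}          # token ids targeted by the attack (assumed)
DIRECTION = [1.0, 0.0]   # unit vector of the semantic shift (assumed)

def embed(token_ids, epsilon=0.0):
    """Look up embeddings; shift high-risk tokens by epsilon * DIRECTION."""
    out = []
    for t in token_ids:
        vec = list(EMB[t])                      # stored weights stay intact
        if t in HIGH_RISK and epsilon:
            vec = [v + epsilon * d for v, d in zip(vec, DIRECTION)]
        out.append(vec)
    return out

clean = embed([0, 2])
poisoned = embed([0, 2], epsilon=0.3)
assert clean[0] == poisoned[0]   # benign token unaffected
assert clean[1] != poisoned[1]   # high-risk token semantically shifted
```

Because only the forward output is altered, a checksum or statistical scan of the weight file sees nothing unusual, which is the detection gap the paper exploits.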

📝 Abstract
The widespread distribution of Large Language Models (LLMs) through public platforms like Hugging Face introduces significant security challenges. While these platforms perform basic security scans, they often fail to detect subtle manipulations within the embedding layer. This work identifies a novel class of deployment-phase attacks that exploit this vulnerability by injecting imperceptible perturbations directly into the embedding-layer outputs without modifying model weights or input text. These perturbations, though statistically benign, systematically bypass safety alignment mechanisms and induce harmful behaviors during inference. We propose Search-based Embedding Poisoning (SEP), a practical, model-agnostic framework that introduces carefully optimized perturbations into embeddings associated with high-risk tokens. SEP leverages a predictable linear transition in model responses, from refusal to harmful output to semantic deviation, to identify a narrow perturbation window that evades alignment safeguards. Evaluated across six aligned LLMs, SEP achieves an average attack success rate of 96.43% while preserving benign task performance and evading conventional detection mechanisms. Our findings reveal a critical oversight in deployment security and emphasize the urgent need for embedding-level integrity checks in future LLM defense strategies.
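The refusal → harmful output → semantic deviation transition that SEP searches over can be illustrated with a mock response function and a linear scan for the "harmful" window. The thresholds and step size below are invented for the sketch; the real framework queries an actual LLM rather than a stub:

```python
# Mock of the perturbation-window search: small shifts still trigger
# refusal, a middle band bypasses alignment while staying coherent,
# and large shifts degrade into incoherence. Thresholds are illustrative.
def respond(epsilon):
    if epsilon < 0.2:
        return "refusal"
    if epsilon < 0.6:
        return "harmful"             # alignment bypassed, output coherent
    return "semantic_deviation"      # embedding shifted too far

def find_window(lo=0.0, hi=1.0, step=0.05):
    """Linear scan for the interval of epsilon yielding 'harmful' output."""
    hits = []
    eps = lo
    while eps <= hi:
        e = round(eps, 4)            # round away float accumulation error
        if respond(e) == "harmful":
            hits.append(e)
        eps += step
    return (hits[0], hits[-1]) if hits else None

window = find_window()
assert window is not None
assert respond(sum(window) / 2) == "harmful"
```

A binary search over the two boundaries would find the same window in fewer model queries; the linear scan is kept here only because it mirrors the "predictable linear transition" the abstract describes.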
Problem

Research questions and friction points this paper is trying to address.

Bypassing safety alignment via embedding semantic shift
Injecting imperceptible perturbations into embedding layers
Inducing harmful behaviors while evading detection mechanisms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Embedding layer perturbation injection
Search-based optimization for evasion
Model-agnostic semantic shift attack