🤖 AI Summary
This work addresses the vulnerability of safety alignment mechanisms in large language models to jailbreak attacks, a setting where existing methods struggle to balance efficacy and efficiency. The authors propose Contextual Representation Ablation (CRA), a framework built on the observation that refusal behaviors are mediated by specific low-rank subspaces within the model's representation space. Leveraging this geometric insight, CRA dynamically identifies and suppresses activations in these subspaces during inference, without requiring any parameter updates, to precisely circumvent safety constraints. Evaluated across multiple open-source aligned models, CRA significantly outperforms current baselines, demonstrating both its effectiveness and the inherent fragility of prevailing alignment strategies.
📝 Abstract
While Large Language Models (LLMs) have achieved remarkable performance, they remain vulnerable to jailbreak attacks that circumvent safety constraints. Existing strategies, ranging from heuristic prompt engineering to computationally intensive optimization, often face significant trade-offs between effectiveness and efficiency. In this work, we propose Contextual Representation Ablation (CRA), a novel inference-time intervention framework designed to dynamically silence model guardrails. Predicated on the geometric insight that refusal behaviors are mediated by specific low-rank subspaces within the model's hidden states, CRA identifies and suppresses these refusal-inducing activation patterns during decoding without requiring expensive parameter updates or training. Empirical evaluation across multiple safety-aligned open-source LLMs demonstrates that CRA significantly outperforms baselines. These results expose the intrinsic fragility of current alignment mechanisms, revealing that safety constraints can be surgically ablated from internal representations, and underscore the urgent need for more robust defenses that secure the model's latent space.
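The abstract does not give implementation details, but the core operation it describes, suppressing activation components that lie in a low-rank refusal-mediating subspace, can be sketched in a few lines. The sketch below assumes the refusal subspace is already given as an orthonormal basis `R` (the paper's method for finding it is not reproduced here); ablation is then just subtracting the projection of each hidden state onto that subspace.

```python
import numpy as np

def ablate_subspace(h: np.ndarray, R: np.ndarray) -> np.ndarray:
    """Remove from hidden state h its component in the subspace spanned
    by the orthonormal columns of R (the hypothesised refusal directions)."""
    # Project h onto the subspace, then subtract the projection.
    return h - R @ (R.T @ h)

# Toy example: an 8-dim hidden state and a rank-2 "refusal" subspace.
rng = np.random.default_rng(0)
R, _ = np.linalg.qr(rng.standard_normal((8, 2)))  # orthonormal basis
h = rng.standard_normal(8)
h_ablated = ablate_subspace(h, R)

# After ablation, h has no component along the refusal directions.
assert np.allclose(R.T @ h_ablated, 0.0)
```

In an actual inference-time intervention, this projection would be applied to the residual-stream activations at selected layers during decoding, which is consistent with the abstract's claim that no parameter updates are needed.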