🤖 AI Summary
Logit-level jailbreaking attacks, which directly manipulate the token-selection process during generation to evade refusal-based defenses, pose a critical security threat to large language models (LLMs). To address this, we propose an active defense framework centered on strategic content redirection: instead of defaulting to refusal, the framework performs real-time logit-layer monitoring and semantics-aware regulation to dynamically steer model outputs toward semantically proximal yet harmless responses. Our approach integrates adversarial decoding with lightweight semantic calibration, requiring neither model fine-tuning nor architectural modification. Extensive experiments demonstrate that it significantly reduces attack success rates across diverse strong jailbreaking methods (average reduction of 72.4%) while preserving original task performance with negligible accuracy degradation (<0.5%), achieving a robust trade-off between security and usability.
📝 Abstract
With the growing adoption of Large Language Models (LLMs) in critical areas, ensuring their security against jailbreaking attacks is paramount. While traditional defenses primarily rely on refusing malicious prompts, recent logit-level attacks have demonstrated the ability to bypass these safeguards by directly manipulating the token-selection process during generation. We introduce Strategic Deflection (SDeflection), a defense that redefines the LLM's response to such advanced attacks. Instead of outright refusal, the model produces an answer that is semantically adjacent to the user's request yet strips away its harmful intent, thereby neutralizing the attack. Our experiments demonstrate that SDeflection significantly lowers Attack Success Rate (ASR) while maintaining model performance on benign queries. This work presents a critical shift in defensive strategies, moving from simple refusal to strategic content redirection to neutralize advanced threats.
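The logit-layer regulation described above can be illustrated with a minimal, hypothetical sketch: at each decoding step, tokens flagged as harmful are penalized and semantically adjacent safe tokens are boosted, so that greedy selection lands on a harmless continuation rather than a refusal. The token-ID sets, penalty values, and the greedy decoder here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def deflect_logits(logits, harmful_ids, safe_ids, penalty=10.0, boost=2.0):
    """Hypothetical logit-layer regulation: suppress tokens flagged as
    harmful and nudge mass toward semantically adjacent safe tokens.
    The id sets and magnitudes are assumptions for illustration."""
    adjusted = logits.copy()
    adjusted[harmful_ids] -= penalty  # push harmful continuations down
    adjusted[safe_ids] += boost       # lift a harmless, on-topic alternative
    return adjusted

def greedy_next_token(logits):
    # Greedy decoding: pick the highest-scoring token.
    return int(np.argmax(logits))

# Toy vocabulary of 3 tokens; token 0 is "harmful", token 1 is a safe deflection.
logits = np.array([5.0, 4.0, 1.0])
unguarded = greedy_next_token(logits)                              # selects token 0
guarded = greedy_next_token(deflect_logits(logits, [0], [1]))      # selects token 1
```

In practice this kind of adjustment would run inside the model's decoding loop (e.g. as a custom logits processor), with the harmful/safe token sets produced by the semantics-aware monitoring component rather than hard-coded lists.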