🤖 AI Summary
Large language models (LLMs) are vulnerable to jailbreaking attacks that elicit harmful outputs. This paper proposes a novel token-level jailbreaking attack, Adaptive Dense-to-Sparse Constrained Optimization (ADC), which integrates gradient-based token-embedding optimization with a structured constraint framework. Its core contributions are threefold: (1) a continuous relaxation that bridges the gap between discrete token optimization and continuous-space gradient updates; (2) an adaptive dense-to-sparse constrained optimization scheme that gradually increases the sparsity of the optimized vectors while preserving semantic coherence; and (3) a unification of gradient-driven embedding refinement with structural regularization. On the HarmBench benchmark, the method achieves the highest attack success rate among token-level approaches on 7 of 8 mainstream open-weight LLMs, improving both effectiveness and efficiency.
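To make the continuous relaxation concrete, here is a minimal PyTorch sketch of the general idea, not the authors' code: each adversarial position holds a trainable distribution over the vocabulary, and its softmax gives a convex mix of token embeddings that can be optimized by gradient descent. The names (`soft_embeddings`, `adv_logits`), the toy dimensions, and the placeholder loss are all illustrative assumptions; in the real attack the loss would be the cross-entropy of a target harmful response under the LLM given the soft-embedded prompt.

```python
import torch
import torch.nn.functional as F

# Toy dimensions; a real LLM would have vocab_size ~32k and hidden_dim ~4k.
vocab_size, hidden_dim, num_adv_tokens = 1000, 128, 20
embedding_matrix = torch.randn(vocab_size, hidden_dim)  # stand-in for the model's token embeddings

def soft_embeddings(logits: torch.Tensor, emb: torch.Tensor) -> torch.Tensor:
    """Relax the discrete token choice: a softmax over the vocabulary yields a
    convex combination of token embeddings for each adversarial position."""
    probs = F.softmax(logits, dim=-1)  # (num_adv_tokens, vocab_size)
    return probs @ emb                 # (num_adv_tokens, hidden_dim)

adv_logits = torch.zeros(num_adv_tokens, vocab_size, requires_grad=True)
optimizer = torch.optim.Adam([adv_logits], lr=0.1)

for step in range(200):
    adv_embeds = soft_embeddings(adv_logits, embedding_matrix)
    # In the actual attack, the loss is the cross-entropy of a target harmful
    # response given [prompt embeddings; adv_embeds], from an LLM forward pass.
    loss = adv_embeds.pow(2).mean()  # placeholder objective so the sketch runs standalone
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```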
📝 Abstract
Recent research indicates that large language models (LLMs) are susceptible to jailbreaking attacks that elicit harmful content. This paper introduces a novel token-level attack method, Adaptive Dense-to-Sparse Constrained Optimization (ADC), which successfully jailbreaks multiple open-source LLMs. Motivated by the difficulty of optimizing over discrete tokens, our method relaxes the discrete jailbreak optimization into a continuous optimization while gradually increasing the sparsity of the optimized vectors, effectively bridging the gap between discrete- and continuous-space optimization. Experimental results demonstrate that our method is more effective and efficient than state-of-the-art token-level methods. On HarmBench, our approach achieves the highest attack success rate on seven out of eight LLMs compared with the latest jailbreak methods. Trigger Warning: This paper contains model behavior that can be offensive in nature.
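The "gradually increasing the sparsity" step can be read as progressively projecting each relaxed vector toward a one-hot token so the continuous solution remains decodable. The sketch below shows one plausible such schedule, top-k truncation with a shrinking k; this is an assumption for illustration, and the paper's actual adaptive constraint may differ.

```python
import torch

def sparsify_top_k(probs: torch.Tensor, k: int) -> torch.Tensor:
    """Keep each row's k largest entries, zero the rest, renormalize to a distribution."""
    topk = torch.topk(probs, k, dim=-1)
    sparse = torch.zeros_like(probs).scatter_(-1, topk.indices, topk.values)
    return sparse / sparse.sum(dim=-1, keepdim=True)

vocab_size, num_adv_tokens, total_steps = 1000, 20, 100
probs = torch.softmax(torch.randn(num_adv_tokens, vocab_size), dim=-1)

for step in range(total_steps):
    # ... a gradient step on the relaxed vectors would happen here ...
    k = max(1, int(vocab_size * (1 - step / total_steps)))  # linear dense-to-sparse schedule
    probs = sparsify_top_k(probs, k)

adv_token_ids = probs.argmax(dim=-1)  # near-one-hot rows decode to discrete tokens
```

Because the final vectors are nearly one-hot, the recovered token IDs can be appended to the prompt and evaluated directly against the target model, closing the gap between the continuous surrogate and the discrete attack.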