🤖 AI Summary
This work addresses the high computational and memory costs of suffix-based jailbreak attacks, which must evaluate large numbers of candidate suffixes. We propose PSKV, a plug-and-play inference optimization that, to our knowledge, is the first to introduce prefix-shared KV caching into the jailbreaking setting. Exploiting the fact that the harmful instruction prefix is shared across all candidate prompts, PSKV computes its KV cache once and reuses it to enable parallel inference over multiple suffixes. By combining KV cache reuse with batched parallel processing, our approach reduces inference time by 40% and cuts peak memory consumption by 50% across five mainstream large language models and six suffix-based attack variants, while preserving attack success rates.
📝 Abstract
Suffix jailbreak attacks serve as a systematic method for red-teaming Large Language Models (LLMs) but suffer from prohibitive computational costs, as a large number of candidate suffixes must be evaluated before a successful jailbreak suffix is identified. This paper presents Prefix-Shared KV Cache (PSKV), a plug-and-play inference optimization technique tailored for jailbreak suffix generation. Our method is motivated by a key observation: although suffix jailbreaking requires evaluating a large number of candidate prompts, they all share the same targeted harmful instruction as their prefix. Therefore, instead of performing redundant inference on the duplicated prefix, PSKV maintains a single KV cache for this prefix and shares it with every candidate prompt, enabling parallel inference over diverse suffixes with minimal memory overhead. This design in turn permits more aggressive batching strategies that would otherwise be limited by memory constraints. Extensive experiments on six widely used suffix attacks across five widely deployed LLMs demonstrate that PSKV reduces inference time by 40% and peak memory usage by 50%, while maintaining the original Attack Success Rate (ASR). The code has been submitted and will be released publicly.
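The core idea can be illustrated with a toy single-head attention layer: the keys and values for the shared prefix are computed once, and each candidate suffix only projects and attends over its own tokens plus the cached prefix KV. The sketch below is a simplified NumPy illustration, not the paper's implementation; all names (`kv`, `attend`, the random projection matrices) are hypothetical, and causal masking and multi-head structure are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy head dimension

# Random projections standing in for a trained attention head (assumption).
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

def kv(x):
    """Project token embeddings x of shape (seq, d) to keys and values."""
    return x @ Wk, x @ Wv

def attend(q, K, V):
    """Softmax attention of queries q over keys K and values V."""
    scores = q @ K.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

# Shared harmful-instruction prefix and two candidate suffixes (toy embeddings).
prefix = rng.standard_normal((5, d))
suffixes = [rng.standard_normal((3, d)) for _ in range(2)]

# 1) Compute the prefix KV cache exactly once.
Kp, Vp = kv(prefix)

# 2) Each candidate reuses the cached prefix KV; only the suffix tokens
#    are projected and attended, avoiding redundant prefix inference.
shared = []
for s in suffixes:
    Ks, Vs = kv(s)
    shared.append(attend(s @ Wq, np.vstack([Kp, Ks]), np.vstack([Vp, Vs])))

# Baseline: recompute the full prompt per candidate (what PSKV avoids).
baseline = []
for s in suffixes:
    Kf, Vf = kv(np.vstack([prefix, s]))
    baseline.append(attend(s @ Wq, Kf, Vf))

# Cached and fully recomputed suffix outputs agree.
assert all(np.allclose(a, b) for a, b in zip(shared, baseline))
```

Because the suffix outputs are identical with and without the cache, the attack's scoring of candidates (and hence its success rate) is unchanged; only the redundant prefix computation, and its duplicated memory footprint across the batch, is removed.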