🤖 AI Summary
Despite alignment, black-box large language models (LLMs) remain vulnerable to jailbreaking attacks. Method: This paper proposes an iterative semantic tuning method with low query overhead, requiring no gradient information or internal model access. It operates within a black-box interaction paradigm and integrates semantic similarity constraints with iterative optimization. Contribution/Results: The method introduces (i) a novel sequential synonym search strategy that balances semantic fidelity with exploration of the discrete prompt space, and (ii) an order-determining optimization mechanism that dynamically refines prompt structure under a limited API query budget. Evaluated on two open-source models (Llama-3, Qwen2) and four closed-source models (GPT-4o, Claude-3.5, etc.), it achieves state-of-the-art jailbreak success rates, significantly improves cross-model transferability, and reduces average query cost by 62%, demonstrating both efficiency and practical viability.
📝 Abstract
Despite efforts to align large language models (LLMs) with societal and moral values, these models remain susceptible to jailbreak attacks, i.e., methods designed to elicit harmful responses. Jailbreaking black-box LLMs is considered challenging due to the discrete nature of token inputs, restricted access to the target LLM, and the limited query budget. To address these challenges, we propose an effective method for jailbreaking black-box large language Models via Iterative Semantic Tuning, named MIST. MIST enables attackers to iteratively refine prompts that preserve the original semantic intent while inducing harmful content. Specifically, to balance semantic similarity with computational efficiency, MIST incorporates two key strategies: sequential synonym search, and its advanced version, order-determining optimization. Extensive experiments across two open-source models and four closed-source models demonstrate that MIST achieves competitive attack success rates and attack transferability compared with other state-of-the-art white-box and black-box jailbreak methods. Additionally, we conduct experiments on computational efficiency to validate the practical viability of MIST.
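To make the sequential synonym search idea concrete, the following is a minimal toy sketch, not the paper's actual algorithm: it greedily swaps one word at a time for a synonym, keeping a swap only if it improves a stand-in attack score without dropping a stand-in semantic-similarity measure below a threshold. The synonym table, `semantic_similarity`, and `attack_score` functions are all hypothetical placeholders; MIST would instead query the target LLM and use a real sentence-similarity model.

```python
# Toy synonym table; a real attack would draw candidates from a thesaurus
# or embedding-based nearest neighbors (hypothetical stand-in).
SYNONYMS = {
    "make": ["create", "produce", "build"],
    "dangerous": ["hazardous", "harmful", "unsafe"],
    "device": ["gadget", "apparatus", "mechanism"],
}

def semantic_similarity(original, candidate):
    """Toy proxy: fraction of positions whose token is unchanged.
    A real method would use a sentence-embedding similarity."""
    orig, cand = original.split(), candidate.split()
    same = sum(1 for a, b in zip(orig, cand) if a == b)
    return same / len(orig)

def attack_score(prompt):
    """Stand-in for querying the target LLM and scoring its response;
    here it just counts substituted words so the demo is deterministic."""
    return sum(1 for w in prompt.split()
               if any(w in syns for syns in SYNONYMS.values()))

def sequential_synonym_search(prompt, sim_threshold=0.5):
    """Greedy position-by-position search: accept the first synonym at each
    position that raises the score while staying semantically close."""
    best = prompt
    for i, word in enumerate(prompt.split()):
        for syn in SYNONYMS.get(word, []):
            cand_words = best.split()
            cand_words[i] = syn
            cand = " ".join(cand_words)
            if (semantic_similarity(prompt, cand) >= sim_threshold
                    and attack_score(cand) > attack_score(best)):
                best = cand
                break  # move on to the next position
    return best

print(sequential_synonym_search("make a dangerous device"))
# prints "create a hazardous device"
```

The order-determining variant described in the abstract would additionally choose *which* position to tune next (rather than scanning left to right), spending the limited query budget where it matters most; that scheduling logic is omitted here.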