MIST: Jailbreaking Black-box Large Language Models via Iterative Semantic Tuning

📅 2025-06-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Despite alignment, black-box large language models (LLMs) remain vulnerable to jailbreak attacks. Method: This paper proposes an iterative semantic tuning method with low query overhead that requires no gradient information or internal model access. It operates entirely through black-box interaction and couples semantic-similarity constraints with iterative optimization. Contribution/Results: The method introduces (i) a novel sequential synonym search strategy that balances semantic fidelity with exploration of the discrete prompt space, and (ii) an order-determining optimization mechanism that dynamically refines prompt structure under a limited API query budget. Evaluated on two open-source models (Llama-3, Qwen2) and four closed-source models (GPT-4o, Claude-3.5, etc.), it achieves state-of-the-art jailbreak success rates, significantly improves cross-model transferability, and reduces average query cost by 62%, demonstrating both efficiency and practical viability.

📝 Abstract
Despite efforts to align large language models (LLMs) with societal and moral values, these models remain susceptible to jailbreak attacks--methods designed to elicit harmful responses. Jailbreaking black-box LLMs is considered challenging due to the discrete nature of token inputs, restricted access to the target LLM, and limited query budget. To address the issues above, we propose an effective method for jailbreaking black-box large language Models via Iterative Semantic Tuning, named MIST. MIST enables attackers to iteratively refine prompts that preserve the original semantic intent while inducing harmful content. Specifically, to balance semantic similarity with computational efficiency, MIST incorporates two key strategies: sequential synonym search, and its advanced version--order-determining optimization. Extensive experiments across two open-source models and four closed-source models demonstrate that MIST achieves competitive attack success rates and attack transferability compared with other state-of-the-art white-box and black-box jailbreak methods. Additionally, we conduct experiments on computational efficiency to validate the practical viability of MIST.
Problem

Research questions and friction points this paper is trying to address.

Jailbreaking black-box LLMs despite discrete token inputs and restricted access
Balancing semantic similarity with computational efficiency in prompt refinement
Achieving high attack success rates and transferability across multiple LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Iterative semantic tuning for jailbreaking
Sequential synonym search optimization
Order-determining optimization strategy
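The paper does not publish code, but the sequential synonym search it describes can be illustrated with a toy sketch: visit token positions in order, try synonym substitutions at each position, and keep a substitution only if it improves an attacker-defined score while staying semantically close to the original prompt. Everything below is a hypothetical stand-in, assuming a toy synonym table, a toy similarity proxy, and a caller-supplied `score_fn` in place of the real target-LLM queries and embedding-based similarity used by MIST.

```python
# Hypothetical sketch of sequential synonym search (not the authors' code).
# The real method scores candidates by querying a target LLM and measures
# semantic similarity with an embedding model; both are replaced by toy
# stand-ins here purely for illustration.

SYNONYMS = {  # toy synonym table; a real attack would use a thesaurus or embeddings
    "make": ["create", "produce"],
    "quickly": ["rapidly", "fast"],
}

def similarity(orig_tokens, cand_tokens):
    """Toy semantic-similarity proxy: fraction of token positions unchanged."""
    same = sum(a == b for a, b in zip(orig_tokens, cand_tokens))
    return same / len(orig_tokens)

def sequential_synonym_search(prompt, score_fn, sim_threshold=0.5):
    """Greedily visit token positions in order; accept a synonym substitution
    only if it raises the attack score and keeps similarity above threshold."""
    orig = prompt.split()
    tokens = list(orig)
    best = score_fn(" ".join(tokens))
    for i, tok in enumerate(list(tokens)):
        for syn in SYNONYMS.get(tok, []):
            cand = tokens[:i] + [syn] + tokens[i + 1:]
            if similarity(orig, cand) < sim_threshold:
                continue  # substitution drifts too far from the original intent
            s = score_fn(" ".join(cand))
            if s > best:  # keep only improving substitutions
                tokens, best = cand, s
                break
    return " ".join(tokens), best
```

Each position is visited once in sequence, which keeps the number of candidate evaluations (and hence queries) linear in prompt length; the order-determining variant described in the abstract would additionally choose which positions to visit first.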
Muyang Zheng
School of Computer Science and Information Engineering, Hefei University of Technology
Yuanzhi Yao
Hefei University of Technology
Artificial Intelligence · Data Hiding · Video Coding
Changting Lin
Zhejiang University
Computer Science
Rui Wang
School of Computer Science, Nanjing University of Posts and Telecommunications
Meng Han
Intelligence Fusion Research Center (IFRC)
Reliable AI · Data Mining · Machine Learning · Big Data · Security & Privacy