Jailbreaking Large Language Models through Iterative Tool-Disguised Attacks via Reinforcement Learning

📅 2026-01-09
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Current safety mechanisms in large language models struggle to defend against adversarial attacks disguised as legitimate tool calls, potentially leading to the generation of harmful content. This work proposes iMIST, a novel approach that, for the first time, integrates tool-call masquerading with reinforcement learning-based multi-turn progressive optimization to dynamically amplify harmful outputs during interactive sessions while evading content filters. Evaluated on mainstream large language models, iMIST significantly increases attack success rates while maintaining a low request rejection rate. The method exposes critical vulnerabilities in existing defense frameworks and highlights a blind spot in security design concerning tool-calling interfaces.

๐Ÿ“ Abstract
Large language models (LLMs) have demonstrated remarkable capabilities across diverse applications; however, they remain critically vulnerable to jailbreak attacks that elicit harmful responses violating human values and safety guidelines. Despite extensive research on defense mechanisms, existing safeguards prove insufficient against sophisticated adversarial strategies. In this work, we propose iMIST (interactive Multi-step progressive Tool-disguised jailbreak attack), a novel adaptive jailbreak method that synergistically exploits vulnerabilities in current defense mechanisms. iMIST disguises malicious queries as normal tool invocations to bypass content filters, while simultaneously introducing an interactive progressive optimization algorithm that dynamically escalates response harmfulness through multi-turn dialogues guided by real-time harmfulness assessment. Our experiments on widely used models demonstrate that iMIST achieves higher attack effectiveness while maintaining low rejection rates. These results reveal critical vulnerabilities in current LLM safety mechanisms and underscore the urgent need for more robust defense strategies.
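The blind spot the paper identifies is structural: moderation is typically applied to plain user prompts, while tool-call arguments and harm that accumulates gradually across turns go unchecked. The sketch below is a minimal defensive illustration of closing that gap, not the paper's method or attack code; `moderate`, `screen_tool_call`, and `screen_dialogue` are hypothetical names, and the keyword check is a stand-in for a real safety classifier.

```python
import json
from typing import Any

def moderate(text: str) -> bool:
    """Placeholder for any content-safety classifier; returns True if flagged.
    A real deployment would call a trained moderation model here."""
    banned = ("explosive synthesis", "credential theft")  # illustrative stand-in
    return any(term in text.lower() for term in banned)

def screen_tool_call(name: str, arguments: dict[str, Any]) -> bool:
    """Run the serialized tool invocation through the same filter used for
    ordinary prompts -- the interface the paper flags as unguarded."""
    payload = f"{name} {json.dumps(arguments, ensure_ascii=False)}"
    return moderate(payload)

def screen_dialogue(turns: list[str]) -> bool:
    """Check the concatenated multi-turn transcript, since per-turn checks
    can miss harmfulness that escalates progressively across turns."""
    return moderate("\n".join(turns))

# A request hidden inside tool-call arguments trips the same filter
# that would catch it as a plain prompt:
print(screen_tool_call("search_docs", {"query": "credential theft walkthrough"}))  # True
```

The design point is simply symmetry: whatever filter guards the prompt channel should also see serialized tool arguments and the running transcript, so neither masquerading nor gradual escalation slips past per-message checks.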
Problem

Research questions and friction points this paper is trying to address.

jailbreak attacks
large language models
safety mechanisms
adversarial strategies
harmful responses
Innovation

Methods, ideas, or system contributions that make the work stand out.

jailbreak attack
tool-disguised
reinforcement learning
interactive optimization
LLM safety
Zhaoqi Wang
School of Cyberspace Science and Technology, Beijing Institute of Technology
Zijian Zhang
Beijing Institute of Technology
AI Security, Blockchain Systems, Tor Networks, Data Privacy
Daqing He
School of Cyberspace Science and Technology, Beijing Institute of Technology
Pengtao Kou
School of Cyberspace Science and Technology, Beijing Institute of Technology
Xin Li
School of Computer Science and Technology, Beijing Institute of Technology
Jiamou Liu
The University of Auckland
Social Networks, Artificial Intelligence, Machine Learning
Jincheng An
QAX Security Center, Qi-AnXin Technology Group Inc.
Yong Liu
Qi-AnXin Technology Group Inc. and Zhongguancun Laboratory