Jailbreaking Large Language Models Through Content Concretization

📅 2025-09-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) remain vulnerable to jailbreaking attacks that undermine their safety mechanisms and increase the risk of malicious content generation. To address this, the authors propose Content Concretization (CC), a two-stage framework: first, a lower-tier model with less constrained safety filters generates an initial executable code draft; second, a higher-tier LLM processes both the original prompt and the draft, iteratively refining the output while preserving semantic intent. CC systematically transforms abstract malicious queries into concrete, syntactically valid, directly executable code, substantially improving attack feasibility. Evaluated on 350 cybersecurity-oriented prompts, CC raises the jailbreak success rate from 7% (no refinement) to 62% after three refinement iterations, at an average cost of only $0.075 per prompt; the generated code executes with little to no modification in most cases. The work introduces a "semantic abstraction → code concretization" lens to jailbreaking research, offering a novel analytical perspective and a practical benchmark for evaluating and strengthening LLM safety mechanisms.

📝 Abstract
Large Language Models (LLMs) are increasingly deployed for task automation and content generation, yet their safety mechanisms remain vulnerable to circumvention through different jailbreaking techniques. In this paper, we introduce Content Concretization (CC), a novel jailbreaking technique that iteratively transforms abstract malicious requests into concrete, executable implementations. CC is a two-stage process: first, generating initial LLM responses using lower-tier models with less constrained safety filters, then refining them through higher-tier models that process both the preliminary output and the original prompt. We evaluate our technique using 350 cybersecurity-specific prompts, demonstrating substantial improvements in jailbreak Success Rates (SRs), increasing from 7% (no refinements) to 62% after three refinement iterations, while maintaining a cost of 7.5¢ per prompt. Comparative A/B testing across nine different LLM evaluators confirms that outputs from additional refinement steps are consistently rated as more malicious and technically superior. Moreover, manual code analysis reveals that generated outputs execute with minimal modification, although optimal deployment typically requires target-specific fine-tuning. As harmful code generation continues to improve, these results highlight critical vulnerabilities in current LLM safety frameworks.
Problem

Research questions and friction points this paper is trying to address.

Circumventing LLM safety mechanisms through iterative concretization
Transforming abstract malicious requests into executable implementations
Improving jailbreak success rates while maintaining low costs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Iterative transformation of abstract malicious requests
Two-stage process with lower and higher-tier models
Refinement iterations significantly increase success rates
Johan Wahréus
KTH Royal Institute of Technology, Stockholm, Sweden
Ahmed Hussain
KTH Royal Institute of Technology, Stockholm, Sweden
Panos Papadimitratos
KTH Royal Institute of Technology, Stockholm, Sweden
Security · Privacy · Networking · Wireless communications