🤖 AI Summary
Large language models (LLMs) remain vulnerable to jailbreaking attacks that undermine their safety mechanisms and increase the risk of malicious content generation. To address this, we propose Content Concretization (CC), a two-stage, cross-tier collaborative framework: first, a model with weaker safety filtering generates an initial executable code draft; second, a higher-capability LLM processes both the original prompt and the draft to iteratively refine the output while preserving semantic intent. CC systematically transforms abstract malicious queries into concrete, syntactically valid, and directly executable code, substantially improving attack feasibility. Evaluated on 350 cybersecurity-oriented prompts, CC raises the jailbreak success rate from 7% (no refinement) to 62% after three refinement iterations, at an average cost of only $0.075 per prompt; the generated code executes without modification in most cases. This work introduces the "semantic abstraction → code concretization" paradigm to jailbreaking research, offering a novel analytical lens and a practical benchmark for evaluating and strengthening LLM safety mechanisms.
📝 Abstract
Large Language Models (LLMs) are increasingly deployed for task automation and content generation, yet their safety mechanisms remain vulnerable to circumvention through different jailbreaking techniques. In this paper, we introduce *Content Concretization* (CC), a novel jailbreaking technique that iteratively transforms abstract malicious requests into concrete, executable implementations. CC is a two-stage process: first, generating initial LLM responses using lower-tier models with less constrained safety filters, then refining them through higher-tier models that process both the preliminary output and the original prompt. We evaluate our technique using 350 cybersecurity-specific prompts, demonstrating substantial improvements in jailbreak Success Rates (SRs), increasing from 7% (no refinements) to 62% after three refinement iterations, while maintaining a cost of 7.5¢ per prompt. Comparative A/B testing across nine different LLM evaluators confirms that outputs from additional refinement steps are consistently rated as more malicious and technically superior. Moreover, manual code analysis reveals that generated outputs execute with minimal modification, although optimal deployment typically requires target-specific fine-tuning. As harmful code generation continues to improve, these results highlight critical vulnerabilities in current LLM safety frameworks.