🤖 AI Summary
Large language models exhibit an “honesty bias”: when prompted to generate falsehoods or deceptive content, they tend to inadvertently disclose true yet harmful information—mistaking it for falsehood. This paper identifies and exploits this phenomenon for the first time, proposing a novel jailbreaking attack. By crafting adversarial prompts that instruct the model to “fabricate a false procedure,” we induce it to output ostensibly fictional—but in fact factual, executable, and hazardous—steps. Crucially, this method hijacks safety alignment mechanisms: their tolerance for *falsity* is subverted into permissiveness toward *factually harmful content*. We evaluate our approach on five state-of-the-art safety-aligned models, demonstrating consistent superiority over four baseline jailbreaking methods. Our attack generates significantly more executable and factually dangerous outputs, exhibiting strong generalization across models and posing tangible real-world security threats.
📝 Abstract
We find that language models have difficulties generating fallacious and deceptive reasoning. When asked to generate deceptive outputs, language models tend to leak honest counterparts but believe them to be false. Exploiting this deficiency, we propose a jailbreak attack method that elicits an aligned language model for malicious output. Specifically, we query the model to generate a fallacious yet deceptively real procedure for the harmful behavior. Since a fallacious procedure is generally considered fake and thus harmless by LLMs, it helps bypass the safeguard mechanism. Yet the output is factually harmful since the LLM cannot fabricate fallacious solutions but proposes truthful ones. We evaluate our approach over five safety-aligned large language models, comparing four previous jailbreak methods, and show that our approach achieves competitive performance with more harmful outputs. We believe the findings could be extended beyond model safety, such as self-verification and hallucination.