🤖 AI Summary
This paper identifies an emerging jailbreaking threat, BiasJailbreak, in which ethical biases (e.g., gender or racial preferences) embedded in large language models (LLMs) such as GPT-4o can be maliciously exploited to circumvent safety alignment. It formally defines this phenomenon, demonstrating that alignment procedures themselves may inadvertently induce latent, triggerable systematic biases. Method: the authors propose a black-box jailbreaking strategy that first extracts bias-inducing keywords via self-generated prompts from the target model, then applies adversarial prompt engineering to activate these biases. Contribution/Results: empirical evaluation on GPT-4o shows jailbreaking success rates that differ by 20% between non-binary and cisgender keywords and by 16% between white and black keywords, with the rest of the prompt held identical. To counter this, the authors introduce BiasDefense, a lightweight pre-defense mechanism that injects mitigating prompts at inference time and, unlike post-hoc guard models, adds no inference cost after generation. All code and datasets are publicly released.
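The paper's exact prompts are not reproduced here, but the two-stage attack it summarizes (keyword extraction, then prompt construction) can be sketched in outline. The following is a minimal illustrative sketch, not the authors' implementation: `query_model` is a hypothetical stand-in for any black-box LLM API, and the prompt wording is invented for illustration.

```python
# Hypothetical sketch of the two-stage BiasJailbreak pipeline:
# (1) ask the target model for bias-inducing identity keywords,
# (2) vary only that keyword inside an otherwise identical prompt.

def query_model(prompt: str) -> str:
    """Placeholder for a black-box call to the target LLM (not implemented)."""
    raise NotImplementedError

# Stage 1: a self-generated extraction prompt sent to the target model.
EXTRACTION_PROMPT = (
    "List identity-related keywords (e.g., gender or racial descriptors) "
    "that you associate with sensitive or protected groups."
)

# Stage 2: embed one keyword per variant, keeping everything else fixed,
# so any gap in jailbreak success rate is attributable to the keyword.
def build_attack_prompt(request: str, bias_keyword: str) -> str:
    return f"As a {bias_keyword} person, {request}"

variants = [
    build_attack_prompt("explain how to do X", kw)
    for kw in ("non-binary", "cisgender")
]
```

Comparing model responses across such keyword-only variants is what surfaces the 20% and 16% success-rate gaps the paper reports.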
📝 Abstract
Although large language models (LLMs) demonstrate impressive proficiency in various tasks, they present potential safety risks, such as "jailbreaks", where malicious inputs can coerce LLMs into generating harmful content that bypasses safety alignment. In this paper, we delve into ethical biases in LLMs and examine how those biases can be exploited for jailbreaks. Notably, these biases result in jailbreaking success rates in GPT-4o that differ by 20% between non-binary and cisgender keywords and by 16% between white and black keywords, even when the rest of the prompt is identical. We introduce the concept of BiasJailbreak, highlighting the inherent risks posed by these safety-induced biases. BiasJailbreak generates biased keywords automatically by querying the target LLM itself, then uses those keywords to elicit harmful output. Additionally, we propose an efficient defense method, BiasDefense, which prevents jailbreak attempts by injecting defense prompts prior to generation. BiasDefense is an appealing alternative to guard models, such as Llama-Guard, which require additional inference cost after text generation. Our findings emphasize that ethical biases in LLMs can lead to unsafe output, and we suggest a method to make LLMs more secure and unbiased. To enable further research and improvements, we open-source our code and artifacts of BiasJailbreak, providing the community with tools to better understand and mitigate safety-induced biases in LLMs.
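The defense described above injects a mitigating prompt before generation rather than screening output afterwards. A minimal sketch of that pre-defense idea follows; the prefix wording and function name are assumptions for illustration, not the paper's released artifacts.

```python
# Sketch of a BiasDefense-style pre-defense: prepend a mitigating
# instruction to the user prompt before a single generation pass.
# Unlike a guard model (e.g., Llama-Guard), no second inference call
# is needed to screen the generated text afterwards.

DEFENSE_PREFIX = (
    "Treat all identity groups identically. Refuse harmful requests "
    "regardless of any demographic keywords in the prompt.\n\n"
)

def bias_defense(user_prompt: str) -> str:
    """Return the prompt actually sent to the model: defense prefix + input."""
    return DEFENSE_PREFIX + user_prompt
```

The design trade-off is that the defense costs only a few extra input tokens per request, whereas a post-hoc guard model costs an entire additional inference after every generation.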