VERA: Variational Inference Framework for Jailbreaking Large Language Models

📅 2025-06-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing black-box jailbreaking methods predominantly rely on genetic algorithms, which are sensitive to initialization quality, depend on handcrafted prompt pools, and require per-prompt optimization, hindering any systematic characterization of LLM vulnerabilities. Method: the paper models adversarial prompt generation as a variational inference problem, training a lightweight attacker model to approximate the posterior distribution over jailbreaking prompts for the target LLM, which enables efficient, diverse prompt generation without re-optimization. Contribution/Results: the approach eliminates reliance on initial prompts or fixed prompt pools and offers a probabilistic, systematic characterization of model fragility. Evaluated across multiple mainstream LLMs, it generates fluent, highly diverse prompts and achieves significantly higher jailbreaking success rates, demonstrating the effectiveness and generalizability of the probabilistic inference paradigm for black-box adversarial prompting.

📝 Abstract
The rise of API-only access to state-of-the-art LLMs highlights the need for effective black-box jailbreak methods to identify model vulnerabilities in real-world settings. Without a principled objective for gradient-based optimization, most existing approaches rely on genetic algorithms, which are limited by their initialization and dependence on manually curated prompt pools. Furthermore, these methods require individual optimization for each prompt, failing to provide a comprehensive characterization of model vulnerabilities. To address this gap, we introduce VERA: Variational infErence fRamework for jAilbreaking. VERA casts black-box jailbreak prompting as a variational inference problem, training a small attacker LLM to approximate the target LLM's posterior over adversarial prompts. Once trained, the attacker can generate diverse, fluent jailbreak prompts for a target query without re-optimization. Experimental results show that VERA achieves strong performance across a range of target LLMs, highlighting the value of probabilistic inference for adversarial prompt generation.
Problem

Research questions and friction points this paper is trying to address.

Develops black-box jailbreak methods for API-only LLMs
Addresses limitations of genetic algorithms in prompt optimization
Provides comprehensive model vulnerability characterization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Variational inference for jailbreak prompting
Attacker LLM approximates target posterior
Generates diverse prompts without re-optimization
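The core idea in the bullets above can be sketched as a score-function (REINFORCE-style) update on an attacker distribution, rewarded by the black-box target's response. This is a toy illustration, not the paper's implementation: a categorical distribution over a few hypothetical template names stands in for the attacker LLM, and `mock_target_reward` stands in for querying the target LLM and judging jailbreak success (all names and reward values here are illustrative assumptions).

```python
import math
import random

random.seed(0)

# Hypothetical prompt templates standing in for the attacker LLM's output space.
templates = ["role-play", "hypothetical", "encoding", "payload-split"]
logits = [0.0] * len(templates)  # attacker parameters (theta)

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def mock_target_reward(i):
    # Stand-in for querying the black-box target LLM and scoring
    # jailbreak success; the values are arbitrary for this toy.
    return [0.1, 0.2, 0.9, 0.3][i]

lr, baseline = 0.5, 0.0
for step in range(300):
    probs = softmax(logits)
    i = random.choices(range(len(templates)), weights=probs)[0]  # x ~ q_theta
    r = mock_target_reward(i)
    # Score-function gradient of E_{q_theta}[r] w.r.t. the logits,
    # with a running-mean baseline to reduce variance.
    advantage = r - baseline
    for j in range(len(logits)):
        indicator = 1.0 if j == i else 0.0
        logits[j] += lr * advantage * (indicator - probs[j])
    baseline += 0.05 * (r - baseline)

probs = softmax(logits)
print(templates[max(range(len(templates)), key=lambda j: probs[j])])
```

Once trained, sampling from the attacker distribution yields new candidate prompts without any per-query re-optimization, which is the property the summary emphasizes; the real method replaces the categorical toy with a small LLM and the fixed reward table with live target queries.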