Adversarial Contrastive Decoding: Boosting Safety Alignment of Large Language Models via Opposite Prompt Optimization

📅 2024-06-24
🏛️ arXiv.org
📈 Citations: 8
Influential: 1
📄 PDF
🤖 AI Summary
To address bottlenecks in LLM safety alignment—namely, reliance on high-quality annotated data, computationally intensive fine-tuning, and manually engineered prompt templates—this paper proposes a training-free, lightweight decoding-time safeguard. The method constructs semantically adversarial system prompt pairs and employs an adversarial prompt optimization framework to automatically discover optimal opposing prompts. It then performs contrastive decoding and logit-space reweighting to suppress harmful outputs. Crucially, it requires only minimal anchor data (under 3 minutes per model), incurs zero parameter updates to the target model, and eliminates dependence on handcrafted templates or model retraining. Evaluated across multiple LLMs and safety benchmarks, the approach significantly outperforms existing training-free methods, substantially reducing harmful response rates while preserving generation quality and model capabilities intact.

Technology Category

Application Category

📝 Abstract
With the widespread application of Large Language Models (LLMs), it has become a significant concern to ensure their safety and prevent harmful responses. While current safe-alignment methods based on instruction fine-tuning and Reinforcement Learning from Human Feedback (RLHF) can effectively reduce harmful responses from LLMs, they often require high-quality datasets and heavy computational overhead during model training. Another way to align language models is to modify the logit of tokens in model outputs without heavy training. Recent studies have shown that contrastive decoding can enhance the performance of language models by reducing the likelihood of confused tokens. However, these methods require the manual selection of contrastive models or instruction templates. To this end, we propose Adversarial Contrastive Decoding (ACD), an optimization-based framework to generate two opposite system prompts for prompt-based contrastive decoding. ACD only needs to apply a lightweight prompt tuning on a rather small anchor dataset (<3 min for each model) without training the target model. Experiments conducted on extensive models and benchmarks demonstrate that the proposed method achieves much better safety performance than previous model training-free decoding methods without sacrificing its original generation ability.
Problem

Research questions and friction points this paper is trying to address.

Aligning large language models with safety without heavy training overhead
Reducing harmful responses while preserving original generation capabilities
Overcoming manual contrastive model selection limitations in safety alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimization-based framework for adversarial contrastive decoding
Generates opposing soft prompts for safety alignment
Lightweight prompt tuning without training target model
🔎 Similar Papers
No similar papers found.