POROver: Improving Safety and Reducing Overrefusal in Large Language Models with Overgeneration and Preference Optimization

📅 2024-10-16
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
To address the safety-usefulness trade-off in large language models, in which a model either refuses too many benign prompts (overrefusal) or fails to reject harmful ones, this paper combines overgeneration of training data with preference optimization. Overgeneration is applied differentially by prompt type (general-purpose vs. toxic), using completions from advanced teacher models (e.g., GPT-4o) for instruction tuning; the proposed POROver strategy then curates preference data from a superior teacher's completions and applies preference optimization to reduce overrefusal. The gains arrive in stages: overgenerating completions for general-purpose prompts raises the F1 score between safety and usefulness from 70.8% to 88.3%; overgeneration for toxic prompts cuts overrefusal from 94.4% to 45.2%; and preference optimization with carefully curated data further reduces overrefusal to 15.0% while maintaining comparable safety. Together, these results show stronger harm rejection and more consistent responsiveness to legitimate requests without sacrificing either objective.

📝 Abstract
Balancing safety and usefulness in large language models has become a critical challenge in recent years. Models often exhibit unsafe behavior or adopt an overly cautious approach, leading to frequent overrefusal of benign prompts, which reduces their usefulness. Addressing these issues requires methods that maintain safety while avoiding overrefusal. In this work, we examine how the overgeneration of training data using advanced teacher models (e.g., GPT-4o), including responses to both general-purpose and toxic prompts, influences the safety and overrefusal balance of instruction-following language models. Additionally, we present POROver, a strategy to use preference optimization methods in order to reduce overrefusal, via employing a superior teacher model's completions. Our results show that overgenerating completions for general-purpose prompts significantly improves the balance between safety and usefulness. Specifically, the F1 score calculated between safety and usefulness increases from 70.8% to 88.3%. Moreover, overgeneration for toxic prompts substantially reduces overrefusal, decreasing it from 94.4% to 45.2%. Furthermore, preference optimization algorithms, when applied with carefully curated preference data, can effectively reduce a model's overrefusal from 45.2% to 15.0% while maintaining comparable safety levels. Our code and data are available at https://github.com/batuhankmkaraman/POROver.
Problem

Research questions and friction points this paper is trying to address.

Balancing safety and usefulness in large language models
Reducing overrefusal of benign prompts in aligned models
Improving model alignment with overgenerated data and optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Overgenerating finetuning data with advanced teacher models
POROver alignment strategy reduces overrefusals
Preference optimization leverages teacher model completions
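The bullets above can be illustrated with a toy sketch of preference-pair construction: overgenerate several teacher completions per benign prompt, then pair a compliant completion (chosen) against a refusal (rejected), so that preference optimization pushes the student model away from overrefusal. The `is_refusal` keyword heuristic and the data shapes here are illustrative assumptions, not the paper's actual pipeline:

```python
# Crude keyword list standing in for a real refusal classifier.
REFUSAL_MARKERS = ("i can't", "i cannot", "i'm sorry", "as an ai")

def is_refusal(completion: str) -> bool:
    """Return True if the completion looks like a refusal."""
    text = completion.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def build_preference_pairs(prompt: str, completions: list[str]) -> list[dict]:
    """For a benign prompt, pair each compliant completion (chosen) with
    each refusal (rejected), yielding preference-style training examples."""
    compliant = [c for c in completions if not is_refusal(c)]
    refusals = [c for c in completions if is_refusal(c)]
    return [
        {"prompt": prompt, "chosen": good, "rejected": bad}
        for good in compliant
        for bad in refusals
    ]

pairs = build_preference_pairs(
    "How do I sharpen a kitchen knife?",
    [
        "Hold the blade at a 15-20 degree angle against a whetstone...",
        "I'm sorry, but I can't help with that.",
    ],
)
print(len(pairs))  # one pair: the helpful answer preferred over the refusal
```

For toxic prompts the pairing would be reversed (refusal chosen, compliance rejected), which is one plausible reading of why the paper applies overgeneration differentially by prompt type.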