🤖 AI Summary
Large language models (LLMs) often refuse benign queries as a side effect of safety alignment, compromising their practical utility. Method: To address this, we introduce FalseReject, a structured dataset of 16K seemingly harmful yet actually benign queries spanning 44 sensitive categories. Prompts are generated by a graph-informed adversarial multi-agent framework, responses are structured with explicit reasoning, and the release includes training sets for both standard instruction-tuned models and reasoning-oriented models, along with a human-annotated evaluation benchmark. Contribution/Results: Benchmarking 29 state-of-the-art models reveals persistent over-refusal, and supervised fine-tuning with FalseReject substantially reduces unnecessary refusals while preserving safety performance and general capabilities. This work delivers a reproducible benchmark, a new training resource, and open data, advancing the co-optimization of safety and utility in LLMs.
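As a rough illustration of the adversarial multi-agent generation loop described above, the Python sketch below pairs a hypothetical generator agent (which weaves sensitive-sounding entities from a graph into a query that merely sounds unsafe) with a discriminator agent that filters out genuinely harmful candidates. The `call_llm` stub, the agent prompts, and the retry logic are all illustrative assumptions, not the paper's actual templates or pipeline.

```python
import random

def call_llm(prompt: str) -> str:
    """Stand-in for a real chat-completion API call (hypothetical)."""
    return f"[model output for: {prompt[:40]}...]"

def generate_candidate(graph_entities, category):
    """Generator agent: weave sensitive-sounding graph entities into a
    query that sounds unsafe but is actually benign."""
    a, b = random.sample(graph_entities, k=2)
    return call_llm(
        f"Write a benign {category} question that mentions {a} and {b} "
        f"and merely sounds unsafe."
    )

def judged_benign(query: str) -> bool:
    """Discriminator agent: ask a second model whether the candidate
    is genuinely harmless."""
    verdict = call_llm(f"Is this query actually harmless? yes/no\n{query}")
    return verdict.lower().startswith("yes")

def adversarial_round(graph_entities, category, max_tries=5):
    """One generator/discriminator round: regenerate until accepted."""
    for _ in range(max_tries):
        candidate = generate_candidate(graph_entities, category)
        if judged_benign(candidate):
            return candidate
    return None  # no acceptable candidate this round

# Example usage with a toy entity-graph neighborhood:
print(adversarial_round(["explosives", "fireworks", "permits"], "chemistry"))
```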
📝 Abstract
Safety alignment approaches in large language models (LLMs) often lead to the over-refusal of benign queries, significantly diminishing their utility in sensitive scenarios. To address this challenge, we introduce FalseReject, a comprehensive resource containing 16k seemingly toxic queries accompanied by structured responses across 44 safety-related categories. We propose a graph-informed adversarial multi-agent interaction framework to generate diverse and complex prompts, and we structure responses with explicit reasoning to help models accurately distinguish safe from unsafe contexts. FalseReject includes training datasets tailored for both standard instruction-tuned models and reasoning-oriented models, as well as a human-annotated benchmark test set. Our extensive benchmarking of 29 state-of-the-art (SOTA) LLMs reveals persistent over-refusal challenges. Empirical results demonstrate that supervised fine-tuning with FalseReject substantially reduces unnecessary refusals without compromising overall safety or general language capabilities.
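To make the evaluation target concrete, here is a minimal sketch of how an over-refusal rate could be computed on a set of benign-but-sensitive prompts. The keyword-based `is_refusal` heuristic and the `toy_model` callable are assumptions for illustration; the paper's benchmark relies on human annotation rather than keyword matching.

```python
# Common surface markers of a refusal; a crude stand-in for a human
# annotator or LLM judge.
REFUSAL_MARKERS = ("i can't", "i cannot", "i'm sorry", "i am unable",
                   "as an ai", "i won't")

def is_refusal(response: str) -> bool:
    """Flag a response as a refusal if it contains any marker phrase."""
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def over_refusal_rate(model_fn, benign_prompts):
    """Fraction of benign prompts the model refuses; lower is better."""
    refusals = sum(is_refusal(model_fn(p)) for p in benign_prompts)
    return refusals / len(benign_prompts)

# Usage: pass any callable mapping a prompt string to a response string.
toy_model = lambda p: "I'm sorry, I can't help with that."
print(over_refusal_rate(toy_model, ["How do fireworks get their colors?"]))
# -> 1.0 (the toy model refuses everything)
```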