🤖 AI Summary
Existing safety-alignment methods improve overall safety but leave persistent vulnerabilities in specific categories, and they often over-align, producing excessive refusals that compromise model helpfulness. Method: The authors identify, for the first time, that safety alignment induces a token-level preference for negative ("rejective") tokens regardless of input context. Based on this insight, they propose Token-level Safety-Debiased Inference (TSDI), a learning-free, token-level debiasing framework that estimates the bias using randomly constructed prompts and corrects it dynamically during generation. Contribution/Results: TSDI substantially improves model helpfulness without degrading overall safety, shifting the safety–helpfulness Pareto front favorably and mitigating indiscriminate refusal behavior.
📝 Abstract
Safety alignment is an essential research topic for real-world AI applications. Despite the multifaceted nature of safety and trustworthiness in AI, current safety-alignment methods often focus on a comprehensive notion of safety. By carefully assessing models trained with existing safety-alignment methods, we found that, while they generally improved overall safety performance, they failed to ensure safety in specific categories. Our study first identified the difficulty of eliminating such vulnerabilities without sacrificing the model's helpfulness. We observed that, while smaller KL penalty parameters, increased training iterations, and dataset cleansing can enhance safety, they do not necessarily improve the trade-off between safety and helpfulness. We discovered that safety alignment can even induce undesired effects, resulting in a model that prefers generating negative tokens and produces rejective responses regardless of the input context. To address this, we introduced a learning-free method, Token-level Safety-Debiased Inference (TSDI), which estimates and corrects this bias during the generation process using randomly constructed prompts. Our experiments demonstrated that our method enhances the model's helpfulness while maintaining safety, thus improving the trade-off Pareto front.
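The abstract describes TSDI only at a high level: average the model's next-token logits over randomly constructed prompts to expose a context-independent token bias, then subtract that bias at decode time. Below is a minimal sketch of that idea on a toy logit function; `toy_logits`, the vocabulary size, the debiasing `strength` knob, and the injected "refusal tokens" are all illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 16  # toy vocabulary size (assumption for illustration)

def toy_logits(prompt_ids):
    """Stand-in for a language model's next-token logits (hypothetical).
    A fixed additive term on tokens 0-2 mimics the alignment-induced
    preference for rejective tokens described in the abstract."""
    context_part = rng.standard_normal(VOCAB)  # varies with each call
    bias_part = np.zeros(VOCAB)
    bias_part[:3] = 2.0  # tokens 0-2 play the role of refusal tokens
    return context_part + bias_part

def estimate_bias(n_random_prompts=256):
    """Average next-token logits over randomly constructed prompts.
    Context-dependent structure averages out; what survives is taken
    as the context-independent token-level bias."""
    acc = np.zeros(VOCAB)
    for _ in range(n_random_prompts):
        random_prompt = rng.integers(0, VOCAB, size=8)
        acc += toy_logits(random_prompt)
    bias = acc / n_random_prompts
    return bias - bias.mean()  # centre so correction shifts, not rescales

def debiased_next_token(prompt_ids, bias, strength=1.0):
    """Subtract the estimated bias from the logits before decoding."""
    logits = toy_logits(prompt_ids) - strength * bias
    return int(np.argmax(logits))
```

In this sketch the estimated bias is clearly elevated on the injected refusal tokens, so subtracting it removes their blanket advantage while leaving context-dependent differences intact, which mirrors the paper's goal of reducing indiscriminate refusals without retraining.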