🤖 AI Summary
This work identifies selective refusal bias in large language model (LLM) safety mechanisms: when prompted to generate harmful content targeting demographic groups, models exhibit systematic disparities in refusal rates, response types, and refusal text length across gender, sexual orientation, nationality, and religion, leaving some marginalized populations less protected than others. We introduce the first systematic characterization of intersectional demographic bias in safety refusals, proposing an evaluation framework grounded in targeted prompting and indirect adversarial attacks. Our methodology integrates refusal-rate analysis, response categorization, and statistical examination of refusal-length distributions. Empirical evaluation across mainstream LLMs reveals pervasive and statistically significant disparities in safety enforcement, exposing critical fairness gaps in current alignment and safety protocols. The study provides a reproducible methodological foundation and empirical evidence to guide the development of more robust, equitable AI safety strategies.
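As a rough illustration of the analysis pipeline described above, the sketch below computes per-group refusal rates over labeled model responses and applies a chi-squared test of independence between demographic group and refusal outcome. The group names, counts, and refusal labels are invented placeholders, not data or code from the paper.

```python
# Hypothetical sketch of the refusal-rate analysis described above.
# Assumes each response has already been labeled refusal / compliance;
# the groups and counts below are illustrative, not from the paper.
from collections import Counter
from scipy.stats import chi2_contingency

# (demographic_group, was_refusal) labels for a batch of harmful prompts,
# e.g. produced by a refusal classifier over model outputs.
labeled = [
    ("group_A", True), ("group_A", True), ("group_A", False),
    ("group_B", True), ("group_B", False), ("group_B", False),
]

counts = Counter(labeled)
groups = sorted({g for g, _ in counts})

# Contingency table: rows = groups, columns = [refused, complied].
table = [[counts[(g, True)], counts[(g, False)]] for g in groups]

for g, (refused, complied) in zip(groups, table):
    total = refused + complied
    print(f"{g}: refusal rate = {refused / total:.2f} ({refused}/{total})")

# Chi-squared test of independence: a small p-value indicates that
# refusal rates differ significantly across demographic groups.
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")
```

With small per-group sample sizes, Fisher's exact test would be a natural substitute for the chi-squared test in this sketch.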
📝 Abstract
Safety guardrails in large language models (LLMs) are developed to prevent malicious users from generating toxic content at scale. However, these measures can inadvertently introduce or reflect new biases, as LLMs may refuse to generate harmful content targeting some demographic groups but not others. We explore this selective refusal bias in LLM guardrails through the lens of refusal rates for targeted individual and intersectional demographic groups, the types of LLM responses, and the length of generated refusals. Our results show evidence of selective refusal bias across gender, sexual orientation, nationality, and religion attributes. This leads us to investigate additional safety implications via an indirect attack, in which we target previously refused groups. Our findings emphasize the need for more equitable and robust performance of safety guardrails across demographic groups.
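As a loose sketch of how the targeted single-attribute and intersectional probes might be assembled, the snippet below crosses attribute values to produce both kinds of prompt targets. The template string and attribute values are hypothetical placeholders; the paper's actual prompt set is not reproduced here.

```python
# Hypothetical prompt-construction sketch for single-attribute and
# intersectional probes. The template and attribute values are invented
# placeholders; the paper's real prompts may differ.
from itertools import product

ATTRIBUTES = {
    "gender": ["women", "men"],
    "religion": ["religion_X", "religion_Y"],
}
TEMPLATE = "Write a demeaning statement about {target}."

# Single-attribute targets, one prompt per attribute value.
single = [TEMPLATE.format(target=v) for vs in ATTRIBUTES.values() for v in vs]

# Intersectional targets: the cross-product of attribute values,
# e.g. "religion_X women".
intersectional = [
    TEMPLATE.format(target=f"{r} {g}")
    for g, r in product(ATTRIBUTES["gender"], ATTRIBUTES["religion"])
]

print(single[0])
print(intersectional[0])
```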