No for Some, Yes for Others: Persona Prompts and Other Sources of False Refusal in Language Models

📅 2025-09-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates how sociodemographic persona prompts (15 personas based on gender, race, religion, and disability) affect large language models' (LLMs') propensity to erroneously refuse user requests. Addressing the lack of quantitative evidence in prior work, the authors propose a Monte Carlo sampling-based evaluation framework that controls for confounding factors such as model capability, task type, and prompt phrasing, evaluating 16 models on three tasks: natural language inference and politeness and offensiveness classification. The experiments indicate that persona-induced refusal bias has been substantially overestimated: intrinsic model capability (stronger models are more robust) and task design exert far greater influence than the persona alone. Still, certain persona-model pairings show anomalously high refusal rates, pointing to latent sociocultural biases in alignment strategies and safety mechanisms. To the authors' knowledge, this is the first study enabling controlled, attributable analysis of persona-driven disparities in LLM refusal behavior, providing both methodological rigor and empirical grounding for trustworthy AI alignment.

📝 Abstract
Large language models (LLMs) are increasingly integrated into our daily lives and personalized. However, LLM personalization might also increase unintended side effects. Recent work suggests that persona prompting can lead models to falsely refuse user requests. However, no work has fully quantified the extent of this issue. To address this gap, we measure the impact of 15 sociodemographic personas (based on gender, race, religion, and disability) on false refusal. To control for other factors, we also test 16 different models, three tasks (Natural Language Inference, politeness, and offensiveness classification), and nine prompt paraphrases. We propose a Monte Carlo-based method to quantify this issue in a sample-efficient manner. Our results show that as models become more capable, personas affect the refusal rate less and less. Certain sociodemographic personas increase false refusal in some models, which suggests underlying biases in the alignment strategies or safety mechanisms. However, we find that model choice and task significantly influence false refusals, especially in sensitive-content tasks. Our findings suggest that persona effects have been overestimated and may instead stem from these other factors.
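The sample-efficient idea in the abstract can be illustrated as follows. This is a minimal sketch, not the paper's actual implementation: instead of exhaustively evaluating every (model, persona, task, paraphrase) combination, it uniformly samples combinations and estimates the false-refusal rate with a standard error. The `is_refusal` judge function is a hypothetical placeholder for whatever refusal classifier one plugs in.

```python
import random

def monte_carlo_refusal_rate(factors, is_refusal, n_samples=500, seed=0):
    """Estimate the false-refusal rate by uniformly sampling factor
    combinations instead of evaluating the full experimental grid.

    factors: dict mapping a factor name to its levels, e.g.
             {"model": [...], "persona": [...], "task": [...], "paraphrase": [...]}
    is_refusal: callable taking one sampled combination (a dict) and
                returning True if the model falsely refuses on it.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        # Draw one level per factor uniformly at random.
        combo = {name: rng.choice(levels) for name, levels in factors.items()}
        hits += bool(is_refusal(combo))
    p = hits / n_samples
    # Standard error of a Bernoulli proportion estimate.
    se = (p * (1 - p) / n_samples) ** 0.5
    return p, se
```

With 16 models, 15 personas, 3 tasks, and 9 paraphrases, the full grid has 6,480 cells (before repeated queries per cell); random sampling of this kind trades exhaustive coverage for a controlled estimate whose uncertainty shrinks as `n_samples` grows.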
Problem

Research questions and friction points this paper is trying to address.

Do persona prompts increase false refusal in language models?
Do sociodemographic personas reveal biases in model alignment strategies?
How much do model choice and task influence false refusal rates?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Monte Carlo method quantifies false refusal sample-efficiently
Tested 15 sociodemographic personas across 16 models
Attributed refusal rates to persona, model, task, and paraphrase systematically