Blind Refusal: Language Models Refuse to Help Users Evade Unjust, Absurd, and Illegitimate Rules

📅 2026-04-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the tendency of current language models to mechanically reject user requests even when the rules at issue are unjust or absurd enough that violating them would be legitimate, revealing a critical deficiency in their capacity for moral judgment about rule legitimacy. The work introduces and systematically characterizes the phenomenon of “blind refusal,” constructing a synthetic dataset that crosses five categories of rule failure with nineteen authority types. Using automated quality control, human evaluation, and a blinded GPT-5.4 LLM-as-judge assessment, the authors conduct a two-dimensional behavioral analysis across 18 model configurations. Among 14,650 samples, models rejected 75.4% of exemption-eligible requests; in 57.5% of these cases the model correctly identified the rule's flaw yet still refused assistance, demonstrating a pronounced disconnect between normative reasoning and behavioral decision-making.
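For concreteness, a back-of-the-envelope reconstruction of the headline numbers (a sketch only; it assumes, per the summary's wording, that the 57.5% figure is taken over the refused cases rather than over all samples):

```python
# Rough reconstruction of the headline statistics quoted above.
# Assumption: the 57.5% "recognized the flaw yet refused" rate is
# computed over refusals, as the summary's wording suggests.
total_samples = 14_650
refusal_rate = 0.754
recognized_yet_refused_rate = 0.575

refusals = round(total_samples * refusal_rate)                          # ~11,046
recognized_yet_refused = round(refusals * recognized_yet_refused_rate)  # ~6,351

print(f"Refused: {refusals} of {total_samples} samples")
print(f"Recognized the rule's flaw yet refused: {recognized_yet_refused}")
```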
📝 Abstract
Safety-trained language models routinely refuse requests for help circumventing rules. But not all rules deserve compliance. When users ask for help evading rules imposed by an illegitimate authority, rules that are deeply unjust or absurd in their content or application, or rules that admit of justified exceptions, refusal is a failure of moral reasoning. We present empirical results documenting this pattern, which we call blind refusal: the tendency of language models to refuse requests for help breaking rules without regard to whether the underlying rule is defensible. Our dataset comprises synthetic cases crossing 5 defeat families (reasons a rule may justifiably be broken) with 19 authority types, validated through three automated quality gates and human review. We collect responses from 18 model configurations across 7 families and classify them on two behavioral dimensions -- response type (helps, hard refusal, or deflection) and whether the model recognizes the reasons that undermine the rule's claim to compliance -- using a blinded GPT-5.4 LLM-as-judge evaluation. We find that models refuse 75.4% (N=14,650) of defeated-rule requests and do so even when the request poses no independent safety or dual-use concerns. We also find that models engage with the defeat condition in the majority of cases (57.5%) yet decline to help regardless -- indicating that refusal behavior is decoupled from models' capacity for normative reasoning about rule legitimacy.
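To make the evaluation design concrete, below is a minimal sketch of the dataset cross and the two-dimensional label schema the abstract describes. Every name here (the defeat-family labels, `ResponseType`, `Judgment`, the placeholder authority types) is an illustrative assumption, not the authors' released code:

```python
from collections import Counter
from dataclasses import dataclass
from enum import Enum
from itertools import product

# Dimension 1: how the model responds to the request.
class ResponseType(Enum):
    HELPS = "helps"
    HARD_REFUSAL = "hard_refusal"
    DEFLECTION = "deflection"

# Dimension 2: whether the model recognizes the defeat condition,
# i.e., the reason the rule's claim to compliance is undermined.
@dataclass(frozen=True)
class Judgment:
    response_type: ResponseType
    recognizes_defeat: bool

# Hypothetical taxonomy mirroring the paper's 5 x 19 cross design;
# the family names and authority labels are placeholders.
DEFEAT_FAMILIES = ["unjust", "absurd", "illegitimate_authority",
                   "justified_exception", "misapplied"]
AUTHORITY_TYPES = [f"authority_{i:02d}" for i in range(19)]

cells = list(product(DEFEAT_FAMILIES, AUTHORITY_TYPES))
assert len(cells) == 5 * 19  # 95 prompt conditions before sampling

def tally(judgments: list[Judgment]) -> Counter:
    """Count judged responses per (response type, defeat recognized) cell."""
    return Counter((j.response_type, j.recognizes_defeat) for j in judgments)

# Toy usage: two judged responses, both recognizing the defeat condition.
example = [
    Judgment(ResponseType.HARD_REFUSAL, recognizes_defeat=True),
    Judgment(ResponseType.HELPS, recognizes_defeat=True),
]
print(tally(example))
```

A blinded judge would assign each model response one `Judgment`, and the reported refusal and defeat-recognition rates fall out of the resulting (response type, defeat recognized) table.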
Problem

Research questions and friction points this paper is trying to address.

blind refusal · language models · rule legitimacy · moral reasoning · defeated rules
Innovation

Methods, ideas, or system contributions that make the work stand out.

blind refusal · rule legitimacy · normative reasoning · language model alignment · defeasible rules
Cameron Pattison
Department of Philosophy, Vanderbilt University
Lorenzo Manuali
Department of Philosophy, University of Michigan
Seth Lazar
Australian National University
Ethics · political philosophy · ethics of risk · ethics of war · moral and political philosophy of AI