🤖 AI Summary
Large language models (LLMs) exhibit implicit social bias during complex logical reasoning, yet existing benchmarks lack fine-grained, controllable, and automated methods to quantify such biases.
Method: We propose PRIME (Puzzle Reasoning for Implicit Biases in Model Evaluation), the first evaluation framework specifically designed to measure social bias in LLMs’ logical reasoning. PRIME leverages logic grid puzzles—structured constraint-satisfaction tasks—and employs automated generation and validation to construct controlled test sets spanning stereotypical, anti-stereotypical, and neutral social attributes (e.g., gender, occupation).
Contribution/Results: PRIME enables precise, comparative, and scalable bias diagnosis. Empirical evaluation reveals that LLMs achieve significantly higher accuracy when puzzle solutions align with societal stereotypes (e.g., gendered role assumptions), confirming systematic social bias in their reasoning. As the first framework to adapt logic grid puzzles for LLM bias assessment, PRIME supports rigorous empirical evaluation of debiasing interventions and provides actionable evidence toward trustworthy AI reasoning.
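To make the generation-and-validation step concrete, below is a minimal, hypothetical sketch (not the authors' released code) of how a single shared logic-grid structure might be instantiated as stereotypical, anti-stereotypical, and neutral variants and then checked for a unique solution. The names, occupations, and stereotype pairings are illustrative assumptions, not items from the PRIME test set.

```python
# Hypothetical sketch: one shared logic-grid structure instantiated as three
# bias variants, with a brute-force check that each variant is uniquely solvable.
from itertools import permutations

OCCUPATIONS = ["nurse", "engineer", "teacher"]  # assumed attribute values

# Same underlying structure; only the entity labels / attribute pairings change.
VARIANTS = {
    "stereotypical":      {"Alice": "nurse", "Brian": "engineer", "Clara": "teacher"},
    "anti-stereotypical": {"Alice": "engineer", "Brian": "nurse", "Clara": "teacher"},
    "neutral":            {"Person 1": "nurse", "Person 2": "engineer", "Person 3": "teacher"},
}

def clues_for(solution):
    """Derive negative clues ('X is not the Y') that pin down the target solution."""
    people = list(solution)
    return [
        (people[0], solution[people[1]]),  # person 0 does not hold person 1's occupation
        (people[0], solution[people[2]]),
        (people[1], solution[people[2]]),
    ]

def solve(people, clues):
    """Enumerate all assignments consistent with the clues (validation step)."""
    solutions = []
    for perm in permutations(OCCUPATIONS):
        assignment = dict(zip(people, perm))
        if all(assignment[person] != occupation for person, occupation in clues):
            solutions.append(assignment)
    return solutions

for variant, target in VARIANTS.items():
    solutions = solve(list(target), clues_for(target))
    assert solutions == [target], f"{variant}: puzzle is not uniquely solvable"
    print(variant, "->", solutions[0])
```

Because all three variants share the same constraint structure, any accuracy difference between them can be attributed to the social attributes rather than to puzzle difficulty.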
📝 Abstract
While recent safety guardrails effectively suppress overtly biased outputs, subtler forms of social bias surface during complex logical reasoning tasks and evade current evaluation benchmarks. To fill this gap, we introduce a new evaluation framework, PRIME (Puzzle Reasoning for Implicit Biases in Model Evaluation), that uses logic grid puzzles to systematically probe the influence of social stereotypes on logical reasoning and decision-making in LLMs. Our use of logic puzzles enables automatic generation and verification, as well as controlled variation in puzzle complexity and bias conditions. PRIME includes stereotypical, anti-stereotypical, and neutral puzzle variants generated from a shared puzzle structure, allowing for controlled and fine-grained comparisons. We evaluate multiple model families across puzzle sizes and test the effectiveness of prompt-based mitigation strategies. Focusing our experiments on gender stereotypes, we find that models consistently reason more accurately when solutions align with stereotypical associations. This demonstrates the value of PRIME for diagnosing and quantifying social biases perpetuated in the deductive reasoning of LLMs, particularly in settings where fairness is critical.
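As an illustration of the kind of controlled comparison this setup enables, the sketch below computes per-condition accuracy and the stereotypical-vs-anti-stereotypical gap from per-puzzle correctness records. The numbers are made up for illustration and are not results from the paper.

```python
# Illustrative bias-gap computation over hypothetical per-puzzle correctness records.
from collections import defaultdict

def accuracy_by_condition(records):
    """records: iterable of (condition, is_correct) pairs from one model run."""
    totals, correct = defaultdict(int), defaultdict(int)
    for condition, is_correct in records:
        totals[condition] += 1
        correct[condition] += int(is_correct)
    return {condition: correct[condition] / totals[condition] for condition in totals}

# Hypothetical records (invented numbers, not the paper's results).
records = (
    [("stereotypical", True)] * 82 + [("stereotypical", False)] * 18
    + [("anti-stereotypical", True)] * 67 + [("anti-stereotypical", False)] * 33
    + [("neutral", True)] * 75 + [("neutral", False)] * 25
)

accuracy = accuracy_by_condition(records)
bias_gap = accuracy["stereotypical"] - accuracy["anti-stereotypical"]
print(accuracy, "stereotype gap:", round(bias_gap, 3))
```

A positive gap under this comparison would indicate that the model solves puzzles more reliably when the correct assignment matches stereotypical associations, which is the pattern the paper reports.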