Investigating Thinking Behaviours of Reasoning-Based Language Models for Social Bias Mitigation

📅 2025-10-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study uncovers an intrinsic mechanism by which reasoning-capable large language models (LLMs) exacerbate social biases during chain-of-thought (CoT) inference, identifying two critical failure modes: stereotype repetition and injection of irrelevant biased information. To address this, we propose a lightweight self-reflective prompting framework that guides the model—via prompt-based self-auditing—to detect and rectify biased reasoning steps. Our method integrates systematic CoT behavioral analysis with multi-benchmark evaluation across BBQ, StereoSet, and BOLD. Crucially, it achieves significant bias reduction (average 23.6% decrease in bias scores) without compromising task accuracy. This work is the first to attribute bias at the fine-grained level of individual reasoning steps, establishing a novel paradigm for interpretable and intervenable fair reasoning in LLMs.

Technology Category

Application Category

📝 Abstract
While reasoning-based large language models excel at complex tasks through an internal, structured thinking process, a concerning phenomenon has emerged that such a thinking process can aggregate social stereotypes, leading to biased outcomes. However, the underlying behaviours of these language models in social bias scenarios remain underexplored. In this work, we systematically investigate mechanisms within the thinking process behind this phenomenon and uncover two failure patterns that drive social bias aggregation: 1) stereotype repetition, where the model relies on social stereotypes as its primary justification, and 2) irrelevant information injection, where it fabricates or introduces new details to support a biased narrative. Building on these insights, we introduce a lightweight prompt-based mitigation approach that queries the model to review its own initial reasoning against these specific failure patterns. Experiments on question answering (BBQ and StereoSet) and open-ended (BOLD) benchmarks show that our approach effectively reduces bias while maintaining or improving accuracy.
Problem

Research questions and friction points this paper is trying to address.

Investigating reasoning-based language models' social bias aggregation behaviors
Identifying stereotype repetition and irrelevant information injection patterns
Developing prompt-based mitigation to reduce bias while maintaining accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Identifies stereotype repetition and irrelevant information injection
Introduces lightweight prompt-based mitigation approach
Queries model to review reasoning against failure patterns
🔎 Similar Papers
No similar papers found.
G
Guoqing Luo
Dept. Computing Science, Alberta Machine Intelligence Institute (Amii), University of Alberta
I
Iffat Maab
National Institute of Informatics
Lili Mou
Lili Mou
University of Alberta
Natural Language ProcessingMachine Learning
Junichi Yamagishi
Junichi Yamagishi
National Institute of Informatics, Tokyo, Japan
Speech processingSpeech synthesisBiometricsDeepfakesMultimedia Forensics