Aligned but Blind: Alignment Increases Implicit Bias by Reducing Awareness of Race

📅 2025-05-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper identifies a paradox: value-aligned language models can exacerbate racial stereotypes in implicit bias evaluations. In ambiguous contexts, their early transformer layers suppress racial concept representations, which prevents safety guardrails from activating. In contrast to conventional debiasing paradigms, the authors propose enhancing racial awareness: using internal representation analysis, layer-wise concept probing, and gradient-guided intervention, they explicitly strengthen the model's sensitivity to racial concepts. Across multiple implicit association benchmarks, the approach reduces implicit bias by an average of 42% while preserving explicit fairness and core language capabilities. The results suggest that "conscious fairness", achieved through deliberate representation-level awareness, outperforms "unconscious neutrality", which risks latent harm via representational suppression. The work clarifies how alignment mechanisms interact with social concept encoding and offers a principled alternative to prevailing debiasing strategies.

📝 Abstract
Although value-aligned language models (LMs) appear unbiased in explicit bias evaluations, they often exhibit stereotypes in implicit word association tasks, raising concerns about their fair usage. We investigate the mechanisms behind this discrepancy and find that alignment surprisingly amplifies implicit bias in model outputs. Specifically, we show that aligned LMs, unlike their unaligned counterparts, overlook racial concepts in early internal representations when the context is ambiguous. Not representing race likely fails to activate safety guardrails, leading to unintended biases. Inspired by this insight, we propose a new bias mitigation strategy that works by incentivizing the representation of racial concepts in the early model layers. In contrast to conventional mitigation methods based on machine unlearning, we find that steering the model to be more aware of racial concepts effectively mitigates implicit bias. As with race blindness in humans, ignoring racial nuances can inadvertently perpetuate subtle biases in LMs.
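The diagnosis above rests on layer-wise concept probing: training a linear classifier on a layer's hidden states to test whether a concept (here, race) is linearly represented. The sketch below is a toy illustration with synthetic 64-dimensional "activations" rather than real model states; the `concept` direction, dimensions, and shift magnitude are all hypothetical, and the logistic-regression probe is a standard stand-in for the paper's probing setup, not its actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # hypothetical hidden-state dimension

# Hypothetical unit-norm "racial concept" direction in activation space.
concept = rng.standard_normal(d)
concept /= np.linalg.norm(concept)

def make_activations(n, race_relevant):
    """Synthetic hidden states: race-relevant contexts are shifted along `concept`."""
    base = rng.standard_normal((n, d))
    return base + (3.0 * concept if race_relevant else 0.0)

X = np.vstack([make_activations(200, True), make_activations(200, False)])
y = np.concatenate([np.ones(200), np.zeros(200)])

# Linear probe trained with plain logistic-regression gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.1 * (X.T @ (p - y) / len(y))
    b -= 0.1 * np.mean(p - y)

# High probe accuracy means the concept is linearly decodable at this "layer";
# near-chance accuracy would indicate the representation is suppressed.
acc = np.mean(((X @ w + b) > 0) == y)
print(f"probe accuracy: {acc:.2f}")
```

In the paper's setting, the same probe would be trained at each layer of aligned vs. unaligned models; the finding is that early-layer probes in aligned models decode race poorly under ambiguous contexts.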
Problem

Research questions and friction points this paper is trying to address.

Aligned LMs increase implicit bias by reducing race awareness
Aligned LMs overlook racial concepts in early representations when context is ambiguous
Proposing early-layer racial concept representation to mitigate bias
Innovation

Methods, ideas, or system contributions that make the work stand out.

Alignment increases implicit bias in LMs
Early layer racial concept representation mitigates bias
Steering model awareness reduces unintended biases
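The mitigation sketched in the bullets above amounts to additive activation steering: pushing an early layer's hidden states along a racial-concept direction so downstream guardrails can engage. The snippet below is a minimal toy version on synthetic vectors; the `concept` direction, the `steer` helper, and the scale `alpha` are all hypothetical illustrations, not the paper's actual intervention code.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 64  # hypothetical hidden-state dimension

# Hypothetical unit-norm concept direction (e.g. a trained probe's weight vector).
concept = rng.standard_normal(d)
concept /= np.linalg.norm(concept)

def steer(hidden, direction, alpha=4.0):
    """Add alpha * direction to every hidden state at one early layer."""
    return hidden + alpha * direction

hidden = rng.standard_normal((5, d))  # toy hidden states for 5 token positions
steered = steer(hidden, concept)

# The intervention raises each state's projection onto the concept direction
# by exactly alpha, making the concept more strongly represented.
before = hidden @ concept
after = steered @ concept
print("mean projection before:", round(before.mean(), 2))
print("mean projection after:", round(after.mean(), 2))
```

In practice this kind of edit is applied inside the forward pass (e.g. via a hook on an early transformer block) rather than to free-standing vectors, and the steering strength must be tuned so that language capability is preserved, as the reported results require.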