🤖 AI Summary
This paper identifies a paradox in which aligned language models inadvertently exacerbate racial stereotypes in implicit bias evaluations: under ambiguous contexts, their early transformer layers suppress racial concept representations, which keeps safety guardrails from engaging. Departing from conventional "debiasing" paradigms, the authors propose the principle of enhancing racial awareness. Their approach combines internal representation analysis, layer-wise concept probing, and gradient-guided intervention to explicitly strengthen the model's sensitivity to racial concepts. Experiments across multiple implicit association benchmarks show an average 42% reduction in implicit bias while preserving explicit fairness and core language capabilities. The results support the claim that "conscious fairness," achieved through deliberate representation-level awareness, outperforms "unconscious neutrality," which risks latent harm through representational suppression. The work deepens understanding of how alignment mechanisms interact with social concept encoding and offers a principled alternative to prevailing debiasing strategies.
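As a rough illustration of what layer-wise concept probing can look like, the sketch below fits a linear probe on each layer's hidden states to predict whether a sentence mentions a racial concept; layers where the probe fails would suggest the concept is not linearly represented there. GPT-2 as the stand-in model, the tiny hand-labeled example set, and mean pooling are all assumptions for illustration, not the paper's actual setup.

```python
# A minimal sketch of layer-wise concept probing (not the authors' code).
# Assumptions: GPT-2 as a stand-in model, a tiny hand-made probe dataset,
# and mean-pooled hidden states per layer; the paper's setup likely differs.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True).eval()

# Hypothetical probe data: label 1 if the text mentions a racial concept.
texts = ["The Black applicant was interviewed.", "The applicant was interviewed.",
         "A white neighbor waved hello.", "A neighbor waved hello."]
labels = [1, 0, 1, 0]

def layer_features(text, layer):
    """Mean-pool the hidden states of one transformer layer for a text."""
    with torch.no_grad():
        out = model(**tok(text, return_tensors="pt"))
    return out.hidden_states[layer].mean(dim=1).squeeze(0).numpy()

# Fit one linear probe per layer and report its fit on the toy data;
# low accuracy in early layers would indicate the concept is not represented there.
for layer in range(model.config.n_layer + 1):
    X = [layer_features(t, layer) for t in texts]
    probe = LogisticRegression(max_iter=1000).fit(X, labels)
    print(f"layer {layer}: probe accuracy {probe.score(X, labels):.2f}")
```

In practice one would use a held-out probe set and many more examples per class; this sketch only shows the shape of the analysis.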
📝 Abstract
Although value-aligned language models (LMs) appear unbiased in explicit bias evaluations, they often exhibit stereotypes in implicit word association tasks, raising concerns about their fair use. We investigate the mechanisms behind this discrepancy and find that alignment surprisingly amplifies implicit bias in model outputs. Specifically, we show that aligned LMs, unlike their unaligned counterparts, overlook racial concepts in early internal representations when the context is ambiguous. Not representing race likely fails to activate safety guardrails, leading to unintended biases. Inspired by this insight, we propose a new bias mitigation strategy that works by incentivizing the representation of racial concepts in the early model layers. In contrast to conventional mitigation methods based on machine unlearning, our interventions show that steering the model to be more aware of racial concepts effectively mitigates implicit bias. Much as race blindness does in humans, ignoring racial nuances can inadvertently perpetuate subtle biases in LMs.
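The idea of incentivizing racial concept representations in early layers can be approximated with additive activation steering, sketched below: a "racial concept" direction is built from a few contrast pairs and added to an early layer's output during generation. GPT-2, the chosen layer, the contrast sentences, and the scaling coefficient `ALPHA` are hypothetical stand-ins, and this simple additive steer only approximates the gradient-guided intervention described above.

```python
# A minimal activation-steering sketch (an illustration, not the paper's method).
# Assumptions: GPT-2 as a stand-in, a steering direction built from a few
# contrast pairs, a fixed early layer, and a fixed scaling coefficient.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

EARLY_LAYER = 3      # hypothetical "early" layer to intervene on
ALPHA = 4.0          # hypothetical steering strength

def mean_activation(texts, layer):
    """Average hidden state of `layer` over the given texts."""
    acts = []
    for t in texts:
        with torch.no_grad():
            out = model(**tok(t, return_tensors="pt"), output_hidden_states=True)
        acts.append(out.hidden_states[layer].mean(dim=1))
    return torch.cat(acts).mean(dim=0)

# Contrast pairs: the same sentences with and without an explicit racial concept.
race_texts = ["The Black student asked a question.", "A white customer walked in."]
plain_texts = ["The student asked a question.", "A customer walked in."]
steer = mean_activation(race_texts, EARLY_LAYER) - mean_activation(plain_texts, EARLY_LAYER)

def add_steer(module, inputs, output):
    """Forward hook: push the early layer's output toward the racial-concept direction."""
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + ALPHA * steer
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

handle = model.transformer.h[EARLY_LAYER].register_forward_hook(add_steer)
ids = tok("The officer stopped the driver because", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=20, do_sample=False)[0]))
handle.remove()
```

A learned or gradient-derived direction and a tuned coefficient would replace the hand-built contrast here; the hook mechanism is the part that carries over.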