Fairness Mediator: Neutralize Stereotype Associations to Mitigate Bias in Large Language Models

📅 2025-04-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) inherit and amplify societal stereotypes due to spurious correlations in training data, posing significant fairness risks. To address this, the paper proposes a fairness calibration method grounded in key-value conceptual representations within MLP layers: bias is formalized as linear associative memory, and an interpretable stereotypical association probe is paired with an adversarial neutralizer that dynamically decouples biased key-value pairs between entities and attributes during inference. The method proceeds in three stages: prompt-driven activation probing, intra-MLP adversarial intervention, and bias representation disentanglement. Evaluated across nine protected attributes, it substantially outperforms state-of-the-art approaches, cutting mitigation overhead by hundreds of minutes relative to the strongest baseline while preserving core language understanding capabilities.

📝 Abstract
LLMs have demonstrated remarkable performance across diverse applications, yet they inadvertently absorb spurious correlations from training data, leading to stereotype associations between biased concepts and specific social groups. These associations perpetuate and even amplify harmful social biases, raising significant fairness concerns. To mitigate such biases, prior studies have attempted to project model embeddings into unbiased spaces during inference. However, these approaches have shown limited effectiveness due to their weak alignment with downstream social biases. Inspired by the observation that concept cognition in LLMs is primarily represented through a linear associative memory mechanism, where key-value mapping occurs in the MLP layers, we posited that biased concepts and social groups are similarly encoded as entity (key) and information (value) pairs, which can be manipulated to promote fairer associations. To this end, we propose Fairness Mediator (FairMed), a bias mitigation framework that neutralizes stereotype associations. Our framework comprises two main components: a stereotype association prober and an adversarial debiasing neutralizer. The prober captures stereotype associations encoded within MLP layer activations by employing prompts centered around biased concepts to detect the emission probabilities for social groups. Subsequently, the adversarial debiasing neutralizer intervenes in MLP activations during inference to equalize the association probabilities among different social groups. Extensive experiments across nine protected attributes show that FairMed significantly outperforms state-of-the-art methods in effectiveness. Compared to the most effective baseline, FairMed achieves competitive efficiency by cutting mitigation overhead by hundreds of minutes. FairMed also maintains the LLM's language understanding capabilities without compromising overall performance.
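The adversarial debiasing step described in the abstract can be sketched as follows: a linear probe maps an MLP activation to social-group probabilities, and gradient steps on the activation push those probabilities toward uniform. The probe class, the `neutralize` update rule, and all dimensions and hyperparameters here are illustrative assumptions, not FairMed's actual implementation.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class StereotypeProber:
    """Hypothetical linear probe from MLP activations to social-group
    probabilities; W (d, k) and b (k,) are assumed already fitted on
    prompt-driven activations."""
    def __init__(self, W, b):
        self.W, self.b = W, b

    def group_probs(self, h):
        # Emission probabilities for the k social groups given activation h.
        return softmax(h @ self.W + self.b)

def neutralize(h, prober, steps=200, lr=5.0):
    """Nudge activation h so the probe's output approaches the uniform
    distribution, i.e. equalized association probabilities across groups."""
    k = prober.b.shape[0]
    target = np.full(k, 1.0 / k)
    h = h.copy()
    for _ in range(steps):
        p = prober.group_probs(h)
        # Gradient of cross-entropy(target, p) w.r.t. the logits is
        # (p - target); chain through the linear probe to get the
        # gradient w.r.t. the activation.
        grad_h = (p - target) @ prober.W.T
        h -= lr * grad_h
    return h
```

In the paper the intervention happens inside the MLP layers at inference time; this sketch only shows the equalization objective on a single activation vector.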
Problem

Research questions and friction points this paper is trying to address.

Mitigate bias in LLMs by neutralizing stereotype associations
Address weak alignment of prior methods with social biases
Maintain LLM performance while reducing harmful stereotype associations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neutralizes stereotype associations via MLP layer manipulation
Uses adversarial debiasing to equalize group association probabilities
Maintains LLM performance while reducing mitigation overhead
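The prompt-driven probing stage listed above could be approximated as follows: fit a softmax probe from MLP activations (synthetic here) to social-group labels. The function name, the plain batch gradient-descent fit, and the toy data shapes are all illustrative assumptions rather than the paper's procedure.

```python
import numpy as np

def fit_group_probe(H, y, k, steps=500, lr=0.2):
    """Fit a linear softmax probe from activations H (n, d) to
    social-group labels y (n,); a hypothetical stand-in for the
    stereotype association prober."""
    n, d = H.shape
    W, b = np.zeros((d, k)), np.zeros(k)
    Y = np.eye(k)[y]                       # one-hot group labels
    for _ in range(steps):
        Z = H @ W + b
        Z -= Z.max(axis=1, keepdims=True)  # numerical stability
        P = np.exp(Z)
        P /= P.sum(axis=1, keepdims=True)  # group emission probabilities
        G = (P - Y) / n                    # cross-entropy gradient w.r.t. logits
        W -= lr * (H.T @ G)
        b -= lr * G.sum(axis=0)
    return W, b
```

Once fitted, such a probe both exposes which activations carry stereotype associations and supplies the gradient signal an adversarial neutralizer would need.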