🤖 AI Summary
This work addresses the challenge of mitigating social bias in large language models without compromising scalability or multi-turn interaction quality—limitations commonly associated with existing debiasing approaches such as fine-tuning or prompt engineering. The authors propose a novel intervention that requires neither model fine-tuning nor prompt modification. Their method leverages cross-group contrastive analysis to identify stereotypical terms and applies integrated gradients to construct a bidirectional bias attribution framework, enabling precise localization of bias-related neurons. Activations of these neurons are then directly modulated at the projection layer. Evaluated on three prominent large language models, the approach significantly reduces social bias while preserving overall model performance, effectively balancing debiasing efficacy with user experience.
📝 Abstract
Large language models (LLMs) have demonstrated impressive capabilities across a wide range of natural language processing tasks. However, their outputs often exhibit social biases, raising fairness concerns. Existing debiasing methods, such as fine-tuning on additional datasets or prompt engineering, face scalability issues or compromise user experience in multi-turn interactions. To address these challenges, we propose a framework for detecting stereotype-inducing words and attributing neuron-level bias in LLMs, without the need for fine-tuning or prompt modification. Our framework first identifies stereotype-inducing adjectives and nouns via comparative analysis across demographic groups. We then attribute biased behavior to specific neurons using two attribution strategies based on integrated gradients. Finally, we mitigate bias by directly intervening on their activations at the projection layer. Experiments on three widely used LLMs demonstrate that our method effectively reduces bias while preserving overall model performance. Code is available at https://github.com/XMUDeepLIT/Bi-directional-Bias-Attribution.
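To make the attribution-then-intervention pipeline concrete, here is a minimal sketch of the two core ingredients: integrated-gradients attribution over neuron activations, and direct damping of the most bias-attributed neuron. This is not the authors' code (see their repository for that); the linear "bias score", the baseline choice, and the damping factor are all illustrative assumptions.

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=50):
    """Approximate IG: (x - baseline) times the mean gradient of the
    score along the straight-line path from baseline to x (midpoint rule)."""
    alphas = (np.arange(steps) + 0.5) / steps
    path = baseline + alphas[:, None] * (x - baseline)
    grads = np.stack([grad_fn(p) for p in path])
    return (x - baseline) * grads.mean(axis=0)

# Toy setup: a hypothetical scalar "bias score" that is linear in a
# layer's activations, so IG has a known closed form (w * (x - baseline)).
w = np.array([0.5, -2.0, 1.0])          # score weights (illustrative)
grad_fn = lambda a: w                    # gradient of w @ a w.r.t. a

activations = np.array([1.0, 2.0, 3.0])  # pretend projection-layer activations
baseline = np.zeros_like(activations)

attr = integrated_gradients(grad_fn, activations, baseline)

# Intervention: damp the activation of the most bias-attributed neuron.
k = int(np.argmax(np.abs(attr)))
debiased = activations.copy()
debiased[k] *= 0.1                       # damping factor is an assumption
```

For a linear score the IG estimate is exact (`attr == w * activations`), which makes the sketch easy to sanity-check; in the paper's setting the score would instead come from the LLM's output distribution over stereotype-inducing words, and the intervention is applied at the projection layer during generation.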