FairSteer: Inference Time Debiasing for LLMs with Dynamic Activation Steering

📅 2025-04-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address inherent societal biases in large language models (LLMs) during inference—without fine-tuning or custom prompting—this paper proposes a real-time, parameter-free debiasing framework. Methodologically, it brings the linear representation hypothesis into dynamic activation-space intervention, establishing a three-stage paradigm: (1) biased activation detection, (2) computation of debiasing steering vectors (DSVs), and (3) real-time steering in hidden-state space. A lightweight linear classifier, trained on contrastive prompts, identifies separable bias directions; the resulting DSVs are then applied dynamically to hidden activations during generation. Evaluated across six mainstream LLMs, the approach significantly improves fairness metrics on question answering, counterfactual evaluation, and open-ended generation tasks while largely preserving original model performance. It exhibits strong cross-model generalizability and sidesteps both prompt sensitivity and fine-tuning overhead.
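The DSV computation described above can be sketched as a mean activation difference over contrastive prompt pairs. This is a minimal illustration, not the paper's released code; the function name, the mean-difference construction, and the unit normalization are assumptions based on the summary's description.

```python
import numpy as np

def compute_dsv(biased_acts: np.ndarray, unbiased_acts: np.ndarray) -> np.ndarray:
    """Sketch of a debiasing steering vector (DSV): the mean difference of
    hidden activations collected from contrastive (biased vs. counter-biased)
    prompt pairs, normalized to unit length."""
    dsv = unbiased_acts.mean(axis=0) - biased_acts.mean(axis=0)
    return dsv / np.linalg.norm(dsv)  # keep only the direction

# Toy example with synthetic activations (hidden size 4): the "unbiased"
# activations are shifted along the first axis, so that axis is the DSV.
rng = np.random.default_rng(0)
biased = rng.normal(size=(8, 4))
unbiased = biased + np.array([1.0, 0.0, 0.0, 0.0])
dsv = compute_dsv(biased, unbiased)
```

Because the synthetic shift is constant, the recovered direction is exactly the first coordinate axis; with real hidden states one would instead average over many contrastive pairs per bias category.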

📝 Abstract
Large language models (LLMs) are prone to capturing biases from their training corpora, leading to potential negative social impacts. Existing prompt-based debiasing methods exhibit instability due to their sensitivity to prompt changes, while fine-tuning-based techniques incur substantial computational overhead and catastrophic forgetting. In this paper, we propose FairSteer, a novel inference-time debiasing framework that requires neither customized prompt design nor model retraining. Motivated by the linear representation hypothesis, our preliminary investigation demonstrates that fairness-related features can be encoded into separable directions in the hidden activation space. FairSteer operates in three steps: biased activation detection, debiasing steering vector (DSV) computation, and dynamic activation steering. Specifically, it first trains a lightweight linear classifier to detect bias signatures in activations, and then computes DSVs as intervention directions derived from small contrastive prompt pairs. Subsequently, it performs debiasing by adjusting activations with DSVs at inference time. Comprehensive evaluation with six LLMs demonstrates the superiority of FairSteer across question-answering, counterfactual input evaluation, and open-ended text generation tasks. Code will be released.
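The "dynamic" part of the third step, as described in the abstract, is that steering is gated by the bias classifier rather than applied unconditionally. A minimal sketch of that gating, assuming a linear probe with weights `w` and bias `b` and a precomputed DSV (all names hypothetical):

```python
import numpy as np

def dynamic_steer(h: np.ndarray, dsv: np.ndarray,
                  w: np.ndarray, b: float, alpha: float = 1.0) -> np.ndarray:
    """Add the steering vector only when the linear probe flags the hidden
    activation h as biased (w @ h + b > 0); otherwise pass it through
    unchanged, preserving the model's behavior on unbiased inputs."""
    if float(w @ h + b) > 0.0:
        return h + alpha * dsv
    return h

# Toy usage: a probe that fires on a positive first coordinate.
dsv = np.array([0.0, 1.0])
w, b = np.array([1.0, 0.0]), 0.0
h_biased = np.array([2.0, 0.0])   # probe fires -> steered
h_clean = np.array([-2.0, 0.0])   # probe silent -> untouched
```

In practice this check would run inside forward hooks on attention and feed-forward outputs at each decoding step; the scalar `alpha` controlling steering strength is an assumed knob, not a documented parameter.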
Problem

Research questions and friction points this paper is trying to address.

Debiasing LLMs without prompt sensitivity
Reducing computational cost of fairness methods
Preventing catastrophic forgetting in model tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic activation steering for debiasing
Lightweight linear classifier detects bias
Contrastive prompts derive intervention directions
👥 Authors
Yichen Li — Zhejiang University
Zhiting Fan — Zhejiang University
Ruizhe Chen — Zhejiang University
Xiaotang Gai — Zhejiang University
Luqi Gong — Research Center for Space Computing System, Zhejiang Lab
Yan Zhang — Zhejiang University
Zuozhu Liu — Assistant Professor, Zhejiang University / University of Illinois Urbana-Champaign (deep learning, vision-language models, medical AI)