DSO: Direct Steering Optimization for Bias Mitigation

📅 2025-12-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Generative models—including vision-language models (VLMs) and large language models (LLMs)—exhibit demographic bias in decision-making (e.g., underestimating the probability that women are doctors), and existing activation-steering methods lack a controllable, inference-time trade-off between fairness and task performance. Method: We propose the first differentiable, task-aligned linear activation-steering framework trained with reinforcement learning, directly optimizing an equality-of-probability fairness objective and eliminating hand-crafted steering heuristics. Contribution/Results: Our method enables real-time, fine-tuning-free, and resampling-free continuous fairness control at inference. It achieves state-of-the-art fairness–capability trade-offs on both VLMs and LLMs across multiple benchmarks, demonstrating superior controllability and generalizability without compromising model utility.
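The linear steering with an inference-time control knob described above can be sketched as an interpolation between a hidden activation and its learned linear transform. This is a minimal illustration under assumptions: the function names, the interpolation scheme, and the use of a single scalar strength `alpha` are hypothetical, not the paper's actual implementation.

```python
import numpy as np

def steer_activation(h, W, b, alpha):
    """Apply a learned linear steering transform to a hidden activation h.

    alpha in [0, 1] is the inference-time control knob: 0 leaves the model's
    activation untouched, 1 applies the full learned transformation, and
    intermediate values trade bias mitigation against task performance.
    """
    steered = h @ W.T + b                        # learned linear transform
    return (1.0 - alpha) * h + alpha * steered   # continuous interpolation

# toy check: an identity transform with zero bias leaves h unchanged
d = 4
h = np.arange(d, dtype=float)
W = np.eye(d)
b = np.zeros(d)
print(np.allclose(steer_activation(h, W, b, 0.7), h))  # → True
```

Exposing `alpha` as a runtime parameter is what makes the trade-off controllable without fine-tuning or resampling: the same learned `W` and `b` serve every operating point.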

📝 Abstract
Generative models are often deployed to make decisions on behalf of users, such as vision-language models (VLMs) identifying which person in a room is a doctor to help visually impaired individuals. Yet, VLM decisions are influenced by the perceived demographic attributes of people in the input, which can lead to biased outcomes like failing to identify women as doctors. Moreover, when reducing bias leads to performance loss, users may have varying needs for balancing bias mitigation with overall model capabilities, highlighting the demand for methods that enable controllable bias reduction during inference. Activation steering is a popular approach for inference-time controllability that has shown potential in inducing safer behavior in large language models (LLMs). However, we observe that current steering methods struggle to correct biases where equiprobable outcomes across demographic groups are required. To address this, we propose Direct Steering Optimization (DSO), which uses reinforcement learning to find linear transformations for steering activations, tailored to mitigate bias while maintaining control over model performance. We demonstrate that DSO achieves a state-of-the-art trade-off between fairness and capabilities on both VLMs and LLMs, while offering practitioners inference-time control over the trade-off. Overall, our work highlights the benefit of designing steering strategies that are directly optimized to control model behavior, providing more effective bias intervention than methods that rely on pre-defined heuristics for controllability.
Problem

Research questions and friction points this paper is trying to address.

Mitigates bias in generative models' decisions
Enables controllable bias reduction during inference
Optimizes fairness-performance trade-off in VLMs and LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning optimizes linear activation transformations
Enables inference-time control over bias-performance trade-offs
Achieves state-of-the-art fairness-capability balance in models
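The equality-of-probability objective that the summary says DSO optimizes can be made concrete as a reward that penalizes the gap between decision probabilities across demographic groups while crediting task performance. The sketch below is hypothetical: the reward shape, the `lam` weighting, and all names are assumptions for illustration, not DSO's actual objective.

```python
def fairness_reward(p_group_a, p_group_b, task_score, lam=1.0):
    """Scalar reward an RL loop could maximize when training a steering transform.

    p_group_a / p_group_b: the model's probability of the positive decision
    (e.g. "is a doctor") for inputs differing only in demographic group.
    task_score: task performance on the same batch, in [0, 1].
    lam: weight trading fairness against capability.
    """
    equality_gap = abs(p_group_a - p_group_b)  # equality-of-probability violation
    return task_score - lam * equality_gap     # reward task skill, punish the gap

# equiprobable groups incur no fairness penalty
print(fairness_reward(0.5, 0.5, task_score=0.9))  # → 0.9
```

Because the gap term is computed from the model's own output probabilities, the objective can be optimized directly rather than approximated through hand-designed steering directions.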