🤖 AI Summary
Large multimodal models (LMMs) exhibit social biases in vision-language dialogue, particularly around gender, race, and other protected attributes, stemming from biases in their training data. To address this, the authors propose a training-free debiasing framework that intervenes in the model's latent space at inference time by applying a steering vector during text generation. The framework offers two complementary ways to construct this vector: a dataset-based approach that contrasts model activations on biased and neutral inputs, and a data-free, optimization-based approach for low-resource settings that derives the vector from a single gradient-based perturbation step, with no need for biased-neutral sample pairs. Empirical results show that these interventions substantially reduce the model's reliance on protected attributes while preserving fluency and sentiment, and the debiased models achieve accuracy comparable to their unmodified counterparts on downstream tasks.
📝 Abstract
Large Multi-Modal Models (LMMs) have demonstrated impressive capabilities as general-purpose chatbots able to engage in conversations about visual inputs. However, their responses are influenced by societal biases present in their training datasets, leading to undesirable differences in how the model responds when presented with images depicting people of different demographics. In this work, we propose a training-free debiasing framework for LMMs that intervenes on the model's representations during text generation by constructing a steering vector that reduces reliance on protected attributes. Our framework introduces two complementary methods: (1) a dataset-based approach that constructs a steering vector by contrasting model activations on biased and neutral inputs, and (2) a novel optimization-based approach designed for low-resource settings, which constructs the steering vector using a single step of gradient-based perturbation without requiring additional data. Our experiments show that these interventions effectively reduce the propensity of LMMs to generate text related to protected attributes while maintaining sentiment and fluency. Furthermore, we demonstrate that debiased LMMs achieve comparable accuracy to their unmodified counterparts on downstream tasks, indicating that bias mitigation can be achieved without sacrificing model performance.
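To make the two construction strategies concrete, here is a minimal NumPy sketch of the general steering-vector idea: a mean-difference vector from contrasting activations, a data-free vector from a single finite-difference gradient step against a scalar bias score, and a projection-removal intervention on a hidden state. This is an illustrative reconstruction, not the paper's implementation; the function names, the finite-difference gradient stand-in, the `bias_score_fn` oracle, and the projection-removal form of the intervention are all assumptions.

```python
import numpy as np

def contrastive_steering_vector(biased_acts, neutral_acts):
    # Dataset-based variant (sketch): difference of mean activations
    # over biased vs. neutral inputs, normalized to unit length.
    v = np.mean(biased_acts, axis=0) - np.mean(neutral_acts, axis=0)
    return v / np.linalg.norm(v)

def gradient_steering_vector(h, bias_score_fn, eps=1e-3):
    # Optimization-based variant (sketch): one gradient step of a scalar
    # bias score w.r.t. the activation h, here approximated by central
    # finite differences. bias_score_fn is a hypothetical stand-in for
    # whatever differentiable bias objective the method optimizes.
    grad = np.zeros_like(h)
    for i in range(h.size):
        e = np.zeros_like(h)
        e[i] = eps
        grad[i] = (bias_score_fn(h + e) - bias_score_fn(h - e)) / (2 * eps)
    return grad / np.linalg.norm(grad)

def debias(h, v, alpha=1.0):
    # One plausible intervention: remove the component of the hidden
    # state h along the (unit-norm) steering vector v.
    return h - alpha * np.dot(h, v) * v
```

Applying `debias` at each generation step leaves the hidden state with (approximately) zero component along the steering direction, which is one common way such interventions suppress an attribute while leaving the orthogonal part of the representation untouched.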