Debias your Large Multi-Modal Model at Test-Time with Non-Contrastive Visual Attribute Steering

📅 2024-11-15
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large multimodal models (LMMs) exhibit social biases—particularly regarding gender, race, and other protected attributes—during vision-language dialogue, stemming from biased training data. To address this, we propose a fine-tuning-free, inference-only latent-space debiasing framework. Our approach introduces a novel non-contrastive visual attribute guidance mechanism: it constructs a data-free steering vector via a single-step, low-resource gradient perturbation in the visual encoder’s activation space, enabling explicit disentanglement of protected attributes. Unlike conventional contrastive methods requiring biased–neutral sample pairs, ours operates without such dependencies. Empirical results show that our method significantly reduces model reliance on sensitive attributes while preserving linguistic fluency, affective consistency, and downstream task performance—achieving accuracy comparable to the original model.

📝 Abstract
Large Multi-Modal Models (LMMs) have demonstrated impressive capabilities as general-purpose chatbots able to engage in conversations about visual inputs. However, their responses are influenced by societal biases present in their training datasets, leading to undesirable differences in how the model responds when presented with images depicting people of different demographics. In this work, we propose a training-free debiasing framework for LMMs that intervenes on the model's representations during text generation by constructing a steering vector that reduces reliance on protected attributes. Our framework introduces two complementary methods: (1) a dataset-based approach that constructs a steering vector by contrasting model activations on biased and neutral inputs, and (2) a novel optimization-based approach designed for low-resource settings, which constructs the steering vector using a single step of gradient-based perturbation without requiring additional data. Our experiments show that these interventions effectively reduce the propensity of LMMs to generate text related to protected attributes while maintaining sentiment and fluency. Furthermore, we demonstrate that debiased LMMs achieve comparable accuracy to their unmodified counterparts on downstream tasks, indicating that bias mitigation can be achieved without sacrificing model performance.
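The dataset-based method in the abstract can be sketched in a few lines: the steering vector is the difference of mean activations on biased versus neutral inputs, and the intervention removes that direction from the hidden state during generation. This is a minimal illustrative sketch with random stand-in activations, not the paper's implementation; the exact intervention form (projection removal, `strength`) is an assumption.

```python
import numpy as np

# Stand-in activations for illustration: rows are examples, columns are
# hidden dimensions of the LMM's language-model layer.
rng = np.random.default_rng(0)
hidden_dim = 8
acts_biased = rng.normal(loc=1.0, size=(16, hidden_dim))   # activations on biased inputs
acts_neutral = rng.normal(loc=0.0, size=(16, hidden_dim))  # activations on neutral inputs

# Contrastive construction: difference of mean activations.
steer = acts_biased.mean(axis=0) - acts_neutral.mean(axis=0)
steer_unit = steer / np.linalg.norm(steer)

def debias(h, strength=1.0):
    """Remove the steering direction from a hidden state h (projection removal)."""
    return h - strength * (h @ steer_unit) * steer_unit

h = rng.normal(size=hidden_dim)
h_debiased = debias(h)
# With strength=1.0 the debiased state is orthogonal to the steering direction.
print(abs(h_debiased @ steer_unit) < 1e-9)
```

The same `debias` hook would be applied at each generation step, leaving all other components of the hidden state untouched.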
Problem

Research questions and friction points this paper is trying to address.

Mitigate societal biases in Large Multi-Modal Models (LMMs).
Develop a training-free debiasing framework for LMMs.
Maintain model performance while reducing bias in text generation.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free debiasing framework for LMMs
Dataset-based steering vector construction
Optimization-based steering vector for low-resource settings
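The optimization-based variant is described only at a high level (a single gradient-based perturbation step, no extra data), so the following is a hedged sketch: it assumes a scalar attribute-alignment score (here a hypothetical linear probe, `probe_w`, not from the paper) and takes one gradient step in activation space, using that step as the steering direction.

```python
import numpy as np

rng = np.random.default_rng(1)
hidden_dim = 8

# Hypothetical linear probe for a protected attribute; the paper does not
# specify the exact objective, so this stands in for illustration only.
probe_w = rng.normal(size=hidden_dim)

def attribute_score(h):
    """Scalar score: how strongly activation h aligns with the attribute probe."""
    return float(h @ probe_w)

def grad_attribute_score(h):
    """Gradient of the linear score with respect to h is just probe_w."""
    return probe_w

# Single gradient step from the current activation: the perturbation itself
# serves as the steering vector, with no biased/neutral contrast pairs needed.
h = rng.normal(size=hidden_dim)
step = 0.5
steer = step * grad_attribute_score(h)
h_steered = h - steer  # move the activation down the attribute-score gradient

print(attribute_score(h_steered) < attribute_score(h))
```

One step suffices here because the goal is a direction to subtract at inference time, not a converged optimum, which is what makes the method cheap enough for low-resource settings.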