🤖 AI Summary
Multimodal large language models (MLLMs) often inherit societal biases from their training data, leading to discriminatory outputs along dimensions such as race and gender. To address this, we propose Anti-Stereotype Debiasing (ASD), a lightweight, plug-in debiasing strategy, together with the Counterfactual dataset with Multiple Social Concepts (CMSC), a training set covering diverse social attributes with high conceptual diversity. ASD combines bias-aware data sampling with autoregressive loss reweighting and integrates into end-to-end training without architectural changes. Evaluated across multiple state-of-the-art MLLMs, ASD reduces social bias significantly (average BiasScore reduction of 37%) without compromising multimodal understanding: visual-language task accuracy remains stable within ±0.3%. This work pioneers the integration of systematic counterfactual data construction with dynamic loss modulation, delivering a scalable, low-intrusion solution for enhancing fairness in MLLMs.
📝 Abstract
Multi-modal Large Language Models (MLLMs) have advanced significantly, offering powerful vision-language understanding capabilities. However, these models often inherit severe social biases from their training datasets, leading to unfair predictions based on attributes such as race and gender. This paper addresses social biases in MLLMs by: i) introducing a comprehensive Counterfactual dataset with Multiple Social Concepts (CMSC), which provides a more diverse and extensive training set than existing datasets; and ii) proposing an Anti-Stereotype Debiasing (ASD) strategy. Our method revisits the MLLM training process, rescales the autoregressive loss function, and improves the data sampling method to counteract biases. Through extensive experiments on various MLLMs, our CMSC dataset and ASD method demonstrate a significant reduction in social biases while maintaining the models' original performance.
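The two training-time ingredients described above, bias-aware sampling and a rescaled autoregressive loss, can be sketched in a few lines. This is a minimal illustration, not the paper's exact implementation: the function names, the per-token weighting rule, and the oversampling weights are all assumptions made for the example.

```python
import math
import random

def reweighted_autoregressive_loss(token_log_probs, bias_weights):
    """Autoregressive (cross-entropy-style) loss in which each target
    token's negative log-probability is scaled by a bias-aware weight.
    Tokens tied to a social concept can be up-weighted (weight > 1) so
    training pushes harder toward the counterfactual target, while
    neutral tokens keep weight 1.  The actual rescaling rule used by
    ASD is not reproduced here; this only shows the mechanism."""
    assert len(token_log_probs) == len(bias_weights)
    total = sum(-w * lp for lp, w in zip(token_log_probs, bias_weights))
    return total / len(token_log_probs)

def sample_training_batch(examples, batch_size, rng=random):
    """Bias-aware sampling sketch: counterfactual (anti-stereotypical)
    examples are drawn more often than stereotypical ones.  The 2:1
    ratio is an arbitrary placeholder, not a value from the paper."""
    weights = [2.0 if ex["counterfactual"] else 1.0 for ex in examples]
    return rng.choices(examples, weights=weights, k=batch_size)

# Toy example: three target tokens, the second carries a social-concept
# term and is up-weighted.
log_probs = [math.log(0.5), math.log(0.25), math.log(0.5)]
weights = [1.0, 2.0, 1.0]
loss = reweighted_autoregressive_loss(log_probs, weights)
```

With all weights equal to 1 this reduces to the standard mean token-level negative log-likelihood, which is what makes the strategy a drop-in change to existing MLLM training loops.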