BioPro: On Difference-Aware Gender Fairness for Vision-Language Models

πŸ“… 2025-11-30
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Vision-language models (VLMs) exhibit pervasive gender bias, and existing fairness methods rely on uniform debiasing strategies that cannot simultaneously eliminate bias in neutral scenarios and preserve legitimate gender semantics in explicit ones. Method: We propose the first difference-aware fairness framework for multimodal learning. It constructs a low-dimensional counterfactual gender-variation subspace and introduces a training-free orthogonal projection mechanism that selectively neutralizes gender-related information, and it further supports continuous bias variables. Contribution/Results: Experiments show our method significantly reduces gender bias in neutral image–text matching (average reduction of 42.7%) while maintaining high semantic fidelity in gender-explicit tasks (accuracy drop <1.2%). It is the first approach to jointly achieve bias mitigation and semantic plausibility in VLMs.

πŸ“ Abstract
Vision-Language Models (VLMs) inherit significant social biases from their training data, notably in gender representation. Current fairness interventions often adopt a difference-unaware perspective that enforces uniform treatment across demographic groups. These approaches, however, fail to distinguish between contexts where neutrality is required and those where group-specific attributes are legitimate and must be preserved. Building upon recent advances in difference-aware fairness for text-only models, we extend this concept to the multimodal domain and formalize the problem of difference-aware gender fairness for image captioning and text-to-image generation. We advocate for selective debiasing, which aims to mitigate unwanted bias in neutral contexts while preserving valid distinctions in explicit ones. To achieve this, we propose BioPro (Bias Orthogonal Projection), an entirely training-free framework. BioPro identifies a low-dimensional gender-variation subspace through counterfactual embeddings and applies projection to selectively neutralize gender-related information. Experiments show that BioPro effectively reduces gender bias in neutral cases while maintaining gender faithfulness in explicit ones, thus providing a promising direction toward achieving selective fairness in VLMs. Beyond gender bias, we further demonstrate that BioPro can effectively generalize to continuous bias variables, such as scene brightness, highlighting its broader applicability.
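The core mechanism described in the abstract can be sketched in a few lines of NumPy: estimate the gender-variation subspace from difference vectors of counterfactual embedding pairs (e.g. captions differing only in "man" vs. "woman"), then project embeddings orthogonally to that subspace in neutral contexts. This is a minimal illustrative sketch, not the paper's implementation; the function names, the SVD-based subspace estimate, and the synthetic embeddings in the usage example are assumptions.

```python
import numpy as np

def gender_subspace(counterfactual_pairs, k=1):
    """Estimate a k-dimensional gender-variation subspace.

    counterfactual_pairs: list of (emb_male, emb_female) embedding pairs
    that differ only in the gendered term. Returns a (k, d) orthonormal
    basis spanned by the top singular directions of the difference vectors.
    (Illustrative estimate; the paper may construct the subspace differently.)
    """
    diffs = np.stack([m - w for m, w in counterfactual_pairs])  # (n, d)
    # Top-k right singular vectors of the (uncentered) differences.
    _, _, vt = np.linalg.svd(diffs, full_matrices=False)
    return vt[:k]

def project_out(embedding, basis):
    """Remove the component of `embedding` that lies in the subspace.

    Applied selectively: only to embeddings from gender-neutral contexts,
    leaving gender-explicit ones untouched (hence 'selective debiasing').
    """
    return embedding - basis.T @ (basis @ embedding)

# Usage with synthetic 4-d embeddings where axis 0 carries gender:
g = np.array([1.0, 0.0, 0.0, 0.0])
pairs = [(g + b, -g + b) for b in (np.array([0.0, 1.0, 2.0, 3.0]),
                                   np.array([0.0, 0.5, -1.0, 2.0]))]
basis = gender_subspace(pairs, k=1)
neutral = np.array([3.0, 1.0, 2.0, 0.0])
debiased = project_out(neutral, basis)  # gender component (axis 0) removed
```

Because the projection is a closed-form linear operation on frozen embeddings, no retraining of the VLM is needed, which is what makes the framework training-free; the same recipe applies to any attribute for which counterfactual pairs can be formed, including continuous ones such as scene brightness.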
Problem

Research questions and friction points this paper is trying to address.

Addresses gender bias in vision-language models
Proposes selective debiasing for neutral and explicit contexts
Introduces training-free framework for bias mitigation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Selective debiasing mitigates bias in neutral contexts
Training-free framework uses bias orthogonal projection
Generalizes to continuous bias variables like brightness