🤖 AI Summary
Bias in computer vision models leads to unfair, unreliable, and poorly generalizing AI systems, yet research on mitigation is hindered by fragmented method implementations, inconsistent evaluation practices, and disparate datasets and metrics that undermine reproducibility and fair comparison. To address this, the authors introduce the Visual Bias Mitigator (VB-Mitigator), an open-source framework that streamlines the development, evaluation, and comparative analysis of visual bias mitigation techniques. VB-Mitigator provides a unified research environment with 12 established mitigation methods and 7 diverse benchmark datasets, and its extensible design allows additional methods, datasets, metrics, and models to be integrated seamlessly. The authors also recommend best evaluation practices and report a comprehensive performance comparison of state-of-the-art methodologies, positioning VB-Mitigator as a foundational codebase for fairness-aware computer vision research.
📝 Abstract
Bias in computer vision models remains a significant challenge, often resulting in unfair, unreliable, and non-generalizable AI systems. Although research into bias mitigation has intensified, progress continues to be hindered by fragmented implementations and inconsistent evaluation practices. Disparate datasets and metrics used across studies complicate reproducibility, making it difficult to fairly assess and compare the effectiveness of various approaches. To overcome these limitations, we introduce the Visual Bias Mitigator (VB-Mitigator), an open-source framework designed to streamline the development, evaluation, and comparative analysis of visual bias mitigation techniques. VB-Mitigator offers a unified research environment encompassing 12 established mitigation methods and 7 diverse benchmark datasets. A key strength of VB-Mitigator is its extensibility, allowing seamless integration of additional methods, datasets, metrics, and models. VB-Mitigator aims to accelerate research toward fairness-aware computer vision models by serving as a foundational codebase for the research community to develop and assess its approaches. To this end, we also recommend best evaluation practices and provide a comprehensive performance comparison among state-of-the-art methodologies.