ViG-Bias: Visually Grounded Bias Discovery and Mitigation

📅 2024-07-02
🏛️ European Conference on Computer Vision
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of localizing and quantifying implicit societal biases (e.g., gender and racial biases) in multimodal vision-language models. The authors propose the first visually grounded bias discovery and mitigation framework. By explicitly aligning visual patches with textual attributes, the method elevates bias assessment from the text level to fine-grained image regions, enabling pixel-level bias analysis. It integrates contrastive vision-language modeling, interpretable attention analysis, causal intervention, and adversarial debiasing training to support automatic bias discovery, attribution localization, and dynamic mitigation. Evaluated across multiple benchmarks, the approach reduces bias metrics by an average of 42% while preserving downstream task accuracy. Key contributions: (1) the first pixel-level bias localization mechanism; (2) an end-to-end interpretable attribution framework; and (3) a lightweight debiasing strategy that requires no labeled bias data.
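The core idea in the summary, scoring fine-grained image regions against a textual attribute, can be illustrated with a minimal sketch. This is not the authors' implementation: the embedding dimensions, random inputs, and function name are all hypothetical, and it simply computes cosine similarity between CLIP-style patch embeddings and one text-attribute embedding to produce a spatial relevance map.

```python
import numpy as np

def patch_attribute_heatmap(patch_embeds, text_embed):
    """Cosine similarity between each image-patch embedding and a
    text-attribute embedding, reshaped into a square spatial grid.
    (Illustrative sketch only, not the ViG-Bias implementation.)"""
    p = patch_embeds / np.linalg.norm(patch_embeds, axis=1, keepdims=True)
    t = text_embed / np.linalg.norm(text_embed)
    sims = p @ t                      # one similarity score per patch
    side = int(np.sqrt(len(sims)))   # assumes a square patch grid
    return sims.reshape(side, side)

rng = np.random.default_rng(0)
patches = rng.normal(size=(49, 512))  # hypothetical 7x7 grid of patch embeddings
attribute = rng.normal(size=512)      # hypothetical embedding of a text attribute
heatmap = patch_attribute_heatmap(patches, attribute)
print(heatmap.shape)  # (7, 7)
```

High-scoring cells in such a map would mark the image regions most associated with the attribute, which is the kind of region-level signal the framework builds on.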

Problem

Research questions and friction points this paper is trying to address.

Bias Detection
Machine Learning Models
Visual Explanations
Innovation

Methods, ideas, or system contributions that make the work stand out.

ViG-Bias
Bias Detection
Visual Interpretability
Badr-Eddine Marani
CentraleSupelec, Université Paris-Saclay, France
Mohamed Hanini
CentraleSupelec, Université Paris-Saclay, France
Nihitha Malayarukil
CentraleSupelec, Université Paris-Saclay, France
S. Christodoulidis
CentraleSupelec, Université Paris-Saclay, France
M. Vakalopoulou
CentraleSupelec, Université Paris-Saclay, France; Archimedes/Athena RC, Greece
Enzo Ferrante
CONICET & Universidad de Buenos Aires
Medical Imaging · Machine Learning · Computer Vision · ML Fairness