Multi Attribute Bias Mitigation via Representation Learning

📅 2025-09-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Real-world images often contain multiple superimposed biases (e.g., texture artifacts, watermarks, gender-makeup correlations, and scene-object couplings) that jointly degrade model robustness and fairness; mitigating single biases in isolation risks merely shifting bias from one attribute to another. To address this, the paper proposes Generalized Multi-Bias Mitigation (GMBM), a two-stage representation-learning framework that needs group labels only during training: (1) Adaptive Bias Integrated Learning (ABIL), which trains per-attribute encoders under group-label supervision and integrates them with the main backbone so the classifier explicitly recognizes known shortcuts, and (2) Gradient Suppression Fine-Tuning, which prunes those bias directions from the backbone's gradients, leaving a single compact network that ignores the shortcuts at test time. The paper also introduces Scaled Bias Amplification (SBA), a distribution-shift-robust bias evaluation metric. Experiments on FB-CMNIST, CelebA, and COCO show substantial improvements in worst-group accuracy, a roughly 50% reduction in multi-attribute bias amplification, and the lowest reported SBA scores, demonstrating superior robustness and generalization.

📝 Abstract
Real-world images frequently exhibit multiple overlapping biases, including textures, watermarks, gendered makeup, scene-object pairings, etc. These biases collectively impair the performance of modern vision models, undermining both their robustness and fairness. Addressing these biases individually proves inadequate, as mitigating one bias often permits or intensifies others. We tackle this multi-bias problem with Generalized Multi-Bias Mitigation (GMBM), a lean two-stage framework that needs group labels only during training and minimizes bias at test time. First, Adaptive Bias Integrated Learning (ABIL) deliberately identifies the influence of known shortcuts by training encoders for each attribute and integrating them with the main backbone, compelling the classifier to explicitly recognize these biases. Then Gradient Suppression Fine-Tuning prunes those very bias directions from the backbone's gradients, leaving a single compact network that ignores all the shortcuts it just learned to recognize. Moreover, we find that existing bias metrics break under subgroup imbalance and train-test distribution shifts, so we introduce Scaled Bias Amplification (SBA): a test-time measure that disentangles model-induced bias amplification from distributional differences. We validate GMBM on FB-CMNIST, CelebA, and COCO, where we boost worst-group accuracy, halve multi-attribute bias amplification, and set a new low in SBA even as bias complexity and distribution shifts intensify, making GMBM the first practical, end-to-end multi-bias solution for visual recognition. Project page: http://visdomlab.github.io/GMBM/
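The gradient-suppression idea described in the abstract can be illustrated with a minimal NumPy sketch: given directions in parameter space associated with known bias attributes (e.g., derived from the per-attribute encoders of ABIL), the backbone gradient is projected onto their orthogonal complement before the update. The function name and formulation here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def suppress_bias_gradients(grad, bias_dirs):
    """Remove known bias components from a parameter gradient.

    grad:      (d,) gradient vector of the shared backbone.
    bias_dirs: (k, d) rows spanning the bias subspace (need not be
               orthonormal; we orthonormalize them via QR).
    Returns the gradient projected onto the orthogonal complement
    of the bias subspace, so updates cannot move along bias directions.
    """
    q, _ = np.linalg.qr(bias_dirs.T)       # (d, k) orthonormal basis
    return grad - q @ (q.T @ grad)         # subtract bias-subspace part

# Example: suppress the first coordinate as a (hypothetical) bias direction
g = np.array([3.0, 4.0, 5.0])
bias = np.array([[1.0, 0.0, 0.0]])
g_clean = suppress_bias_gradients(g, bias)  # -> [0., 4., 5.]
```

After projection the cleaned gradient is exactly orthogonal to every bias direction, so fine-tuning leaves the bias-aligned components of the backbone untouched.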
Problem

Research questions and friction points this paper is trying to address.

Mitigating multiple overlapping biases in vision models
Addressing bias amplification from distributional differences
Improving robustness and fairness in visual recognition
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage framework with adaptive bias learning
Gradient suppression fine-tuning for bias pruning
Scaled Bias Amplification metric for evaluation
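For context on the SBA contribution, the classic (unscaled) bias-amplification quantity it builds on can be sketched as follows: how much more strongly a protected attribute co-occurs with the model's positive predictions than with the ground-truth positives. The exact scaling that makes SBA robust to subgroup imbalance and train-test shift is defined in the paper and is not reproduced here.

```python
import numpy as np

def bias_amplification(attr, y_true, y_pred, target=1):
    """Classic bias amplification for one binary attribute.

    attr, y_true, y_pred: aligned 0/1 arrays.
    Returns P(attr=1 | y_pred=target) - P(attr=1 | y_true=target):
    positive values mean the model associates the attribute with the
    target class more strongly than the data does.
    """
    data_assoc = attr[y_true == target].mean()   # association in the data
    model_assoc = attr[y_pred == target].mean()  # association in predictions
    return model_assoc - data_assoc
```

A single scalar per attribute-class pair like this is what SBA then rescales so that distribution shift between train and test does not masquerade as model-induced amplification.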