Common-Sense Bias Discovery and Mitigation for Classification Tasks

📅 2024-01-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Image classification models often suffer from spurious correlations rooted in commonsense knowledge (e.g., “cow → pasture”) present in training data, degrading generalization. This paper proposes the first text description–based general framework for unsupervised bias discovery and mitigation. First, noun phrase embeddings are extracted from image-associated textual descriptions and clustered semantically to automatically identify cross-sample commonsense bias patterns. Second, co-occurrence statistics and a human-in-the-loop interface jointly validate biases and enable on-demand intervention. Finally, feature decorrelation is achieved via data reweighting. Evaluated on multiple benchmark datasets, our method systematically uncovers previously unknown commonsense bias patterns—demonstrating, for the first time, that such biases are prevalent and discoverable from text alone. After debiasing, model accuracy and robustness significantly surpass those of existing unsupervised approaches.
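The first step described above — grouping noun phrases of similar semantics into features — can be sketched as a small greedy clustering over phrase embeddings. This is an illustrative stand-in, not the paper's implementation: the 2-D vectors, the `cluster_by_cosine` helper, and the similarity threshold are all hypothetical, and a real pipeline would cluster high-dimensional sentence-encoder embeddings.

```python
import math

def cluster_by_cosine(embeddings, threshold=0.8):
    """Greedy semantic clustering (hypothetical sketch): each phrase joins
    the first cluster whose founding vector has cosine similarity above
    `threshold`, otherwise it starts a new cluster."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    clusters = []  # list of (founder_vector, [member phrases])
    for phrase, vec in embeddings:
        for founder, members in clusters:
            if cos(vec, founder) >= threshold:
                members.append(phrase)
                break
        else:
            clusters.append((vec, [phrase]))
    return [members for _, members in clusters]

# Toy 2-D "embeddings": scene phrases point one way, animal phrases another.
phrases = [
    ("green pasture", (1.0, 0.1)),
    ("grassy field",  (0.9, 0.2)),
    ("a cow",         (0.1, 1.0)),
    ("the cattle",    (0.2, 0.9)),
]
groups = cluster_by_cosine(phrases)
# → [['green pasture', 'grassy field'], ['a cow', 'the cattle']]
```

Each resulting cluster acts as one "feature" whose presence across the dataset can then be tallied for co-occurrence analysis.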

📝 Abstract
Machine learning model bias can arise from dataset composition: correlated sensitive features can disturb the downstream classification model's decision boundary and lead to performance differences along these features. Existing de-biasing works tackle the most prominent bias features, such as the colors of digits or the backgrounds of animals. However, a real-world dataset often includes a large number of feature correlations that manifest intrinsically in the data as common-sense information. Such spurious visual cues can further reduce model robustness. Thus, practitioners need a complete picture of these correlations and the flexibility to treat the biases of concern for their specific domain tasks. With this goal, we propose a novel framework to extract comprehensive bias information in image datasets based on textual descriptions, a modality rich in common sense. Specifically, features are constructed by clustering noun phrase embeddings of similar semantics. Each feature's presence across the dataset is inferred, and pairwise co-occurrence statistics are measured, with spurious correlations optionally examined through a human-in-the-loop interface. Downstream experiments show that our method discovers novel model biases on multiple image benchmark datasets. Furthermore, the discovered biases can be mitigated by a simple data re-weighting strategy that de-correlates the features and outperforms state-of-the-art unsupervised bias mitigation methods.
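The re-weighting strategy in the abstract can be illustrated with a minimal sketch. Assuming each sample carries a pair of discovered text features (say, an object feature and a context feature), weighting each sample by `P(f)P(g) / P(f, g)` makes the reweighted joint distribution of the two features factorize, i.e. de-correlates them. The weighting formula and the `decorrelation_weights` helper are illustrative assumptions, not the paper's exact method.

```python
from collections import Counter

def decorrelation_weights(samples):
    """Per-sample weights w = P(f)P(g) / P(f, g) for a feature pair (f, g).
    Hypothetical sketch: under these weights the reweighted joint
    distribution of (f, g) factorizes, removing the spurious correlation."""
    n = len(samples)
    pf = Counter(f for f, _ in samples)   # marginal counts of feature f
    pg = Counter(g for _, g in samples)   # marginal counts of feature g
    joint = Counter(samples)              # joint counts of (f, g)
    return [
        (pf[f] / n) * (pg[g] / n) / (joint[(f, g)] / n)
        for f, g in samples
    ]

# Toy dataset: "cow" co-occurs with "pasture" far more often than chance.
data = [("cow", "pasture")] * 8 + [("cow", "beach")] * 2 \
     + [("camel", "pasture")] * 2 + [("camel", "beach")] * 8

weights = decorrelation_weights(data)
# Counter-stereotypical pairs (cow/beach) receive larger weights (2.5)
# than stereotypical ones (cow/pasture, 0.625).
```

On this toy data every reweighted feature pair ends up with equal total mass, so a classifier trained with these sample weights can no longer exploit the cow-pasture shortcut.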
Problem

Research questions and friction points this paper is trying to address.

Machine Learning Bias
Dataset Common Sense Bias
Model Vulnerability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bias Identification
Weight Adjustment
Improved Debiasing Technique
Miao Zhang — New York University
Zee Fryer — Reality Defender Inc
Ben Colman — Reality Defender Inc
Ali Shahriyari — Reality Defender Inc
Gaurav Bharaj — Unknown affiliation