Quantifying Query Fairness Under Unawareness

📅 2025-06-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional ranking algorithms often inherit biases from their training data, disadvantaging marginalized groups. In information access systems, fairness is typically evaluated against ground-truth group labels; but when groups are defined by sensitive attributes (e.g., demographics), these labels are often unavailable, a setting known as "fairness under unawareness". Inferring labels with a classifier is a common workaround, but classifier predictions are unreliable under distribution shift, yielding inaccurate fairness measurements. This paper proposes a quantification-based group prevalence estimator that is robust to such shift, jointly models multiple sensitive attributes, and establishes the first reliable protocol for evaluating ranking fairness under unawareness across multiple queries. Experiments show that the approach significantly outperforms existing baselines in accuracy, stability, and scalability across diverse settings.

📝 Abstract
Traditional ranking algorithms are designed to retrieve the most relevant items for a user's query, but they often inherit biases from data that can unfairly disadvantage vulnerable groups. Fairness in information access systems (IAS) is typically assessed by comparing the distribution of groups in a ranking to a target distribution, such as the overall group distribution in the dataset. These fairness metrics depend on knowing the true group labels for each item. However, when groups are defined by demographic or sensitive attributes, these labels are often unknown, leading to a setting known as "fairness under unawareness". To address this, group membership can be inferred using machine-learned classifiers, and group prevalence is estimated by counting the predicted labels. Unfortunately, such an estimation is known to be unreliable under dataset shift, compromising the accuracy of fairness evaluations. In this paper, we introduce a robust fairness estimator based on quantification that effectively handles multiple sensitive attributes beyond binary classifications. Our method outperforms existing baselines across various sensitive attributes and, to the best of our knowledge, is the first to establish a reliable protocol for measuring fairness under unawareness across multiple queries and groups.
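The abstract describes the usual evaluation recipe: compare the group distribution in a ranking against a target distribution. A minimal sketch of one such metric, using an L1 distance over the top-k of a ranking (the function name, toy groups "A"/"B", and the choice of L1 are illustrative assumptions, not the paper's metric):

```python
from collections import Counter

def group_exposure_gap(ranked_groups, target_dist, k):
    """L1 distance between the group distribution in the top-k of a
    ranking and a target distribution (e.g., dataset-wide group shares).
    Smaller means closer to the target under this simple metric."""
    top_k = ranked_groups[:k]
    counts = Counter(top_k)
    ranked_dist = {g: counts.get(g, 0) / k for g in target_dist}
    return sum(abs(ranked_dist[g] - target_dist[g]) for g in target_dist)

# Toy ranking over items with (assumed known) group labels
ranking = ["A", "A", "A", "B", "A", "A", "B", "A", "A", "A"]
target = {"A": 0.5, "B": 0.5}  # hypothetical dataset-wide shares
gap = group_exposure_gap(ranking, target, k=10)  # 0.8 vs 0.5 per group -> 0.6
```

Note that the metric consumes group labels directly; when those labels must be predicted, the paper's point is that the resulting distribution estimate, and hence the metric, becomes unreliable.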
Problem

Research questions and friction points this paper is trying to address.

Measuring fairness in rankings without true group labels
Handling unreliable group estimates under dataset shift
Extending fairness evaluation to multiple sensitive attributes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Robust fairness estimator using quantification
Handles multiple sensitive attributes effectively
Reliable protocol for fairness under unawareness
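To make the quantification idea concrete: the simplest quantifier in the literature is adjusted classify-and-count (ACC), which corrects the naive predicted-label count using the classifier's error rates. The sketch below is a standard binary ACC illustration, not the paper's estimator (which handles multiple, non-binary sensitive attributes); the rates and prevalences are invented for the demo:

```python
import numpy as np

def classify_and_count(preds):
    # Naive prevalence estimate: fraction of items predicted positive.
    return preds.mean()

def adjusted_classify_and_count(preds, tpr, fpr):
    """ACC: since E[cc] = p*tpr + (1-p)*fpr, invert for p and clip:
        p_hat = (cc - fpr) / (tpr - fpr)
    tpr/fpr would be estimated on held-out labeled data."""
    cc = classify_and_count(preds)
    return float(np.clip((cc - fpr) / (tpr - fpr), 0.0, 1.0))

# Simulate a classifier with tpr=0.8, fpr=0.3 on data with true prevalence 0.5.
# The naive count concentrates near 0.5*0.8 + 0.5*0.3 = 0.55 (biased),
# while ACC recovers roughly 0.5.
rng = np.random.default_rng(0)
n = 100_000
true_labels = rng.random(n) < 0.5
preds = np.where(true_labels, rng.random(n) < 0.8, rng.random(n) < 0.3)
cc = classify_and_count(preds)    # cc ~ 0.55
acc = adjusted_classify_and_count(preds, tpr=0.8, fpr=0.3)  # acc ~ 0.50
```

The key property, which the paper builds on, is that the correction stays valid under prior-probability shift (group proportions changing between training and evaluation), exactly the regime where raw label counting breaks down.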