🤖 AI Summary
Existing fairness evaluation methods for clustering overlook how protected attributes affect clustering quality, leading to a disconnect between fairness and quality assessment. To address this, we propose FACROC, the first fairness measure for clustering grounded in ROC-curve analysis. FACROC brings ROC methodology into clustering evaluation: it quantifies overall clustering quality via the Area Under the Clustering Curve (AUCC), while characterizing fairness through the degree of shift and overlap among ROC curves computed separately for each protected subgroup. This enables a quality-aware, interpretable, and visually intuitive joint assessment of fairness and clustering performance. Extensive experiments across multiple benchmark datasets and state-of-the-art (fair) clustering algorithms show that FACROC substantially improves both the discriminative power and the interpretability of fairness evaluation, establishing a new evaluation paradigm for fair clustering.
📝 Abstract
Fair clustering has attracted remarkable attention from the research community. Many fairness measures for clustering have been proposed; however, they do not take into account the clustering quality with respect to the values of the protected attribute. In this paper, we introduce a new visual fairness measure for clustering based on ROC curves, namely FACROC. This measure employs AUCC as a measure of clustering quality and then computes the difference between the corresponding ROC curves for each value of the protected attribute. Experimental results on several popular datasets for fairness-aware machine learning and well-known (fair) clustering models show that FACROC is a valuable tool for visually evaluating the fairness of clustering models.
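The per-subgroup ROC construction described above can be sketched in a few lines. This is a minimal, illustrative sketch only, assuming AUCC's standard construction (pairwise similarities as ranking scores, co-cluster membership as the positive label); the `aucc` and `subgroup_aucc` helper names are our own, the brute-force Mann-Whitney AUC stands in for a full ROC computation, and FACROC itself compares the per-group curves' shift and overlap rather than a single scalar gap.

```python
import numpy as np

def aucc(points, clusters):
    """AUCC sketch: ROC-AUC where pairwise similarities (negative
    Euclidean distances) are scores and "same cluster" is the
    positive label. Brute-force, for illustration only."""
    scores, labels = [], []
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            scores.append(-np.linalg.norm(points[i] - points[j]))
            labels.append(clusters[i] == clusters[j])
    pos = [s for s, l in zip(scores, labels) if l]
    neg = [s for s, l in zip(scores, labels) if not l]
    if not pos or not neg:          # degenerate: a single cluster
        return float("nan")
    # AUC as P(within-cluster pair ranks above between-cluster pair),
    # counting ties as one half (Mann-Whitney form).
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

def subgroup_aucc(points, clusters, protected):
    """Clustering quality computed separately per protected-attribute
    value; FACROC then compares the resulting per-group ROC curves."""
    out = {}
    for g in set(protected):
        idx = [i for i, v in enumerate(protected) if v == g]
        out[g] = aucc(points[idx], [clusters[i] for i in idx])
    return out
```

On a toy dataset with two well-separated clusters and two protected groups, both per-group AUCC values come out near 1 and their gap near 0, mirroring the "overlapping curves means fair" reading of the plot.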