🤖 AI Summary
This study addresses the lack of effective metrics for individual fairness in existing community detection methods, which may lead to similar nodes being assigned to different communities in an unfair manner. The authors propose a novel vector distance measure derived from the community co-occurrence matrix, enabling, for the first time, a computable quantification of individual fairness. They systematically evaluate the fairness–performance trade-offs of several algorithms—including Significance, Surprise, Combo, Leiden, and SBMDL—on both synthetic and real-world networks. Their analysis reveals that individual and group fairness are not interchangeable and are significantly influenced by the detectability of community structure. Notably, high group fairness or clustering accuracy does not guarantee individual fairness. The study further identifies algorithms that achieve superior fairness–quality trade-offs in dense versus sparse graphs.
📝 Abstract
Community detection is a fundamental task in complex network analysis. Fairness-aware community detection seeks to prevent biased node partitions, typically framed in terms of individual fairness, which requires similar nodes to be treated similarly, and group fairness, which aims to avoid disadvantaging specific groups of nodes. While existing literature on fair community detection has primarily focused on group fairness, we introduce a novel measure to quantify individual fairness in community detection methods. The proposed measure captures unfairness as the vectorial distance between a node's true and predicted community representations, computed using the community co-occurrence matrix. We provide a comprehensive empirical investigation of a broad set of community detection algorithms from the literature on both synthetic networks, with varying levels of community explicitness, and real-world networks. In particular, we investigate the fairness–performance trade-off using standard quality metrics and compare individual fairness outcomes with existing group fairness measures. The results show that individual unfairness can occur even when group fairness or clustering accuracy is high, underscoring that individual and group fairness are not interchangeable. Moreover, fairness depends critically on the detectability of community structure. Nevertheless, we find that Significance and Surprise (for denser graphs) and Combo, Leiden, and SBMDL (for sparser graphs) achieve a better trade-off between individual fairness and community quality. Overall, our findings, together with the fact that community detection is an important step in many downstream network analysis tasks, highlight the necessity of developing fairness-aware community detection methods.
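To make the proposed measure concrete, the following is a minimal sketch of one plausible instantiation: each node is represented by its row in the binary community co-occurrence matrix (entry (i, j) is 1 iff nodes i and j share a community), and per-node unfairness is the Euclidean distance between that row in the ground-truth and predicted partitions. The function names and the choice of Euclidean distance are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def co_occurrence(labels):
    """Binary co-occurrence matrix: entry (i, j) is 1 iff nodes i and j
    are assigned to the same community under `labels`."""
    labels = np.asarray(labels)
    return (labels[:, None] == labels[None, :]).astype(float)

def individual_unfairness(true_labels, pred_labels):
    """Per-node unfairness as the Euclidean distance between a node's row
    in the true and predicted co-occurrence matrices (an illustrative
    instantiation of the vectorial distance described in the abstract)."""
    C_true = co_occurrence(true_labels)
    C_pred = co_occurrence(pred_labels)
    return np.linalg.norm(C_true - C_pred, axis=1)
```

Under this sketch, a partition identical to the ground truth (up to label permutation) yields zero unfairness for every node, while misplacing a single node produces nonzero distances concentrated on that node and its true community.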