🤖 AI Summary
This work addresses the misalignment between existing attributed graph clustering (AGC) research and industrial requirements, where methods often suffer from narrow evaluation protocols, poor scalability, and insufficient robustness to heterophily and noise. To bridge this gap, we propose Encode-Cluster-Optimize, a unified framework that decouples AGC into three modular components (encoding, clustering, and optimization), enabling a truly modular design of AGC approaches for the first time. Building on this framework, we reformulate the evaluation paradigm by integrating multidimensional metrics that cover semantic alignment, structural integrity, and computational efficiency. Through large-scale benchmarking on real-world datasets, we empirically expose the limitations of current methods in practical scenarios. Our study not only reveals the “monoculture” bias prevalent in academic research but also establishes a systematic pathway toward deploying robust, scalable AGC systems in industrial applications.
📝 Abstract
Attributed Graph Clustering (AGC) is a fundamental unsupervised task that partitions nodes into cohesive groups by jointly modeling structural topology and node attributes. While the advent of graph neural networks and self-supervised learning has catalyzed a proliferation of AGC methodologies, a widening chasm persists between academic benchmark performance and the stringent demands of real-world industrial deployment. To bridge this gap, this survey provides a comprehensive, industrially grounded review of AGC from three complementary perspectives. First, we introduce the Encode-Cluster-Optimize taxonomic framework, which decomposes the diverse algorithmic landscape into three orthogonal, composable modules: representation encoding, cluster projection, and optimization strategy. This unified paradigm enables principled architectural comparisons and inspires novel methodological combinations. Second, we critically examine prevailing evaluation protocols to expose the field's academic monoculture: a pervasive over-reliance on small, homophilous citation networks, the inadequacy of supervised-only metrics for an inherently unsupervised task, and the chronic neglect of computational scalability. In response, we advocate for a holistic evaluation standard that integrates supervised semantic alignment, unsupervised structural integrity, and rigorous efficiency profiling. Third, we explicitly confront the practical realities of industrial deployment. By analyzing operational constraints such as massive scale, severe heterophily, and tabular feature noise, alongside extensive empirical evidence from our companion benchmark, we outline actionable engineering strategies. Furthermore, we chart a clear roadmap for future research, prioritizing heterophily-robust encoders, scalable joint optimization, and unsupervised model selection criteria to meet production-grade requirements.
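The Encode-Cluster-Optimize decomposition described above can be sketched as a minimal pipeline. The concrete components below (a symmetric-normalized propagation encoder, Lloyd's k-means as the cluster projection, and restart-based selection by the unsupervised k-means objective as the optimization strategy) are hypothetical stand-ins chosen to illustrate the modular contract, not any specific method from the survey:

```python
import numpy as np

# Illustrative sketch of the Encode-Cluster-Optimize (ECO) decomposition.
# Function names and concrete choices are assumptions for demonstration,
# not the survey's actual API.

def encode(A, X, hops=2):
    """Representation encoding: smooth attributes X over adjacency A
    with symmetric normalization (in the spirit of simple propagation
    encoders)."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    P = d_inv_sqrt @ A_hat @ d_inv_sqrt            # normalized propagation
    Z = X.astype(float).copy()
    for _ in range(hops):
        Z = P @ Z
    return Z

def cluster(Z, k, iters=20, seed=0):
    """Cluster projection: plain Lloyd's k-means on the embeddings."""
    rng = np.random.default_rng(seed)
    centers = Z[rng.choice(len(Z), size=k, replace=False)]
    labels = np.zeros(len(Z), dtype=int)
    for _ in range(iters):
        dists = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():                # keep empty centers fixed
                centers[j] = Z[labels == j].mean(axis=0)
    return labels

def optimize(A, X, k, restarts=5):
    """Optimization strategy: here, unsupervised model selection across
    restarts by the k-means objective (lowest within-cluster variance)."""
    Z = encode(A, X)
    best_labels, best_obj = None, np.inf
    for seed in range(restarts):
        labels = cluster(Z, k, seed=seed)
        centers = np.array([Z[labels == j].mean(axis=0) for j in range(k)])
        obj = ((Z - centers[labels]) ** 2).sum()
        if obj < best_obj:
            best_obj, best_labels = obj, labels
    return best_labels

# Toy example: two disjoint triangles with block-distinct attributes.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    A[i, j] = A[j, i] = 1.0
X = np.array([[1, 0]] * 3 + [[0, 1]] * 3, dtype=float)
labels = optimize(A, X, k=2)
```

Because the three stages communicate only through the embedding matrix and the label vector, any component can be swapped independently, which is the property the framework's modular design relies on.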